Tag: security

More on the mechanics of GDPR

Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.


‘Enjoyment’ and ‘compliance-focused courses’ are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from Futurelearn’s course on Understanding the General Data Protection Regulation. This post summarises some of my learning and builds upon my previous post.

This week, the focus was on the rights of data subjects. It started with a discussion about the ‘modalities’ by which communication between the data controller or processor and the data subject takes place:

By modalities, we mean different mechanisms that are used to facilitate the exercise of data subjects’ rights under the GDPR, such as those relating to different forms of information provision (in writing, spoken, electronically) and other actions to be taken when data subjects invoke their rights.

Although the videos could be improved (I just use the transcripts), the mix of real-world examples, quizzes, and reflection is great and suits the way I learn best.

I discovered that the GDPR makes provision not only for what should be communicated by data controllers, but also for how this should be done:

In the first place, measures must be taken by data controllers to provide any information or any communication relating to the processing to these individuals in a concise, transparent, intelligible and easily accessible form, using the language that is clear and plain. For instance, it should be done when personal data are collected from data subjects or when the latter exercise their rights, such as the right of access. This requirement of transparent information and communication is especially important when children are data subjects.

Moreover, unless the data subject is somehow attempting to abuse the GDPR’s provisions, the data controller must provide the requested information free of charge.

The number of times my surname is spelled incorrectly (often ‘Bellshaw’), or companies have other details wrong, is astounding. It’s good to know, therefore, that the GDPR focuses on the rectification of individuals’ personal data:

In addition, the GDPR contains another essential right that cannot be disregarded. This is the right to rectification. If controllers store personal data of individuals, the latter are further entitled to the right to rectify, without any undue delay, inaccurate information concerning them. Considering the purpose of the processing, any data subject has the right to have his or her personal data completed such as, for instance, by providing a supplementary statement.

So far, I’ve focused on me as a user of technologies — and, indeed, the course uses Google’s services as an example. However, as lead for Project MoodleNet, I’m doing this course as a representative of Moodle, an organisation that would be both data controller and processor.

There are specific things that must be built into any system that collects personal data:

At the time of the first communication with data subjects, the existence of the right to object – as addressed earlier – must be indicated to data subjects in a clear manner and separately from other information. This right can be exercised by data subjects when we deal with the use of information society services by automated means using technical specifications. Importantly, the right to object also exists when individuals’ personal data are processed for scientific or historical research or statistical purposes. This is, however, not the case if the processing is carried out for reasons of public interest.

Project MoodleNet will be a valuable service, but not from a scientific, historical, or statistical point of view. Nor will the data processing be carried out for reasons of public interest. As such, the ‘right to object’ should be set out clearly when users sign up for the service.

In addition, users need to be able to move their data out of the service and erase what was previously there:

The right to erasure is sometimes known as the right to be forgotten, though this denomination is not entirely correct. Data subjects have the right to obtain from data controllers the erasure of personal data concerning them without undue delay.

I’m not entirely clear what ‘undue delay’ means in practice, but we should build systems with these things in mind. Being able to add, modify, and delete information is a key part of a social network. I wonder what happens when blockchain is involved, given that it’s immutable?
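
To make that a bit more concrete for myself, here’s a rough sketch (entirely my own illustration, nothing official from the course or from Moodle) of how rectification, portability, and erasure might be designed into a service from the start. It’s a toy in-memory store, not anything production-ready:

```python
from dataclasses import dataclass, field
import json


@dataclass
class UserRecord:
    user_id: str
    profile: dict = field(default_factory=dict)


class PersonalDataStore:
    """Toy in-memory store illustrating GDPR-style data subject rights."""

    def __init__(self):
        self._records: dict[str, UserRecord] = {}

    def collect(self, user_id: str, **attributes):
        record = self._records.setdefault(user_id, UserRecord(user_id))
        record.profile.update(attributes)

    def export(self, user_id: str) -> str:
        # Data portability: hand back everything held, in a machine-readable form.
        record = self._records[user_id]
        return json.dumps({"user_id": record.user_id, "profile": record.profile})

    def rectify(self, user_id: str, **corrections):
        # Rectification: fix inaccurate data without undue delay.
        self._records[user_id].profile.update(corrections)

    def erase(self, user_id: str):
        # Erasure: remove the record entirely.
        self._records.pop(user_id, None)


store = PersonalDataStore()
store.collect("doug", surname="Bellshaw")   # inaccurate!
store.rectify("doug", surname="Belshaw")    # rectification
print(store.export("doug"))                 # portability
store.erase("doug")                         # erasure
```

In a real system each of these operations would sit behind an authenticated endpoint and be logged, but the point is that the rights map quite naturally onto operations you would want to build anyway.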

The thing that concerns most organisations when it comes to GDPR is Article 79, which states that data subjects have legal recourse if they’re not happy with the response they receive:

Furthermore, we should mention the right to an effective judicial remedy against a controller or processor laid down in Article 79. It allows data subjects to initiate proceedings against data controllers or processors before a court of the Member State of the establishment of controllers or processors or in the Member State where they have their habitual residence unless controllers or processors are public authorities of the Member States and exercise their public powers. Thus, data subjects can directly complain before a judicial institution against controllers and processors, such as Google or others.

I’m particularly interested in what effect data subjects having the right “not to be subjected to automated individual decision-making” will have. I can’t help but think that (as Google has already started to do through granular opt-in questions) organisations will find ways to make users feel like it’s in their best interests. They already do that with ‘personalised advertising’.

There’s a certain amount of automation that can be useful, the standard example being Amazon’s recommendations system. However, I think the GDPR focuses more on things like decisions about whether or not to give you insurance based on your social media profile:

There are three additional rights of data subjects laid down in the General Data Protection Regulation, and we will cover them here. These rights are – the right not to be subjected to automated individual decision-making, the right to be represented by organisations and others, and the right to compensation. Given that we live in a technologically advanced society, many decisions can be taken by the systems in an automatic manner. The GDPR grants to all of us a right not to be subjected to a decision that is based only on an automated processing, which includes profiling. This decision must significantly affect an individual, for example, by creating certain legal effects.
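
Thinking about how that right might surface in a system (again, just my own sketch, not anything from the course): an automated decision path could check whether the data subject has objected and, if so, fall back to human review.

```python
def decide_insurance_premium(profile: dict, human_review_queue: list) -> str:
    # Right not to be subject to a solely automated decision with significant
    # effects: if the data subject has objected, route to a human instead.
    if profile.get("objects_to_automated_decisions", False):
        human_review_queue.append(profile)
        return "queued for human review"
    # Otherwise a (toy) automated rule makes the call.
    return "approved" if profile.get("risk_score", 1.0) < 0.5 else "referred"


queue = []
print(decide_insurance_premium({"risk_score": 0.3}, queue))
print(decide_insurance_premium({"risk_score": 0.3, "objects_to_automated_decisions": True}, queue))
```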

Thankfully, when it comes to challenging organisations on the provisions of the GDPR, data subjects can delegate their representation to a non-profit organisation. This is a sensible step, and prevents lawyers from becoming rich off GDPR challenges. Otherwise, I can imagine data sovereignty becoming the next personal injury industry.

If an individual feels that he or she can better give away his or her representation to somebody else, this individual has the right to contact a not-for-profit association– such as European Digital Rights – in order to be represented by it in filing complaints, exercising some of his or her rights, and receiving compensation. This might be useful if an action is to be taken against such a tech giant as Google or any other person or entity. Finally, persons who have suffered material or non-material damage as a result of an infringement of the GDPR have the right to receive compensation from the controller or processor in question.

Finally, and given that the GDPR applies not only across European countries, but to any organisation that processes EU citizen data, the following is interesting:

The European Union and its Member States cannot simply impose restrictions addressed in Article 23 GDPR when they wish to. These restrictions must respect the essence of the fundamental rights and freedoms and be in line with the requirements of the EU Charter of Fundamental Rights and the European Convention for the Protection of Human Rights and Fundamental Freedoms. In addition, they are required to constitute necessary and proportionate measures in a democratic society, meaning that there must be a pressing social need to adopt these legal instruments and that they must be proportionate to the pursued legitimate aim. Also, they must be aiming to safeguard certain important interests. So, laws adopted by the EU or its Member States that seek to restrict the scope of data subjects’ rights are required to be necessary and proportionate and must protect various interests discussed below.

I learned a lot this week which will stand me in good stead as we design Project MoodleNet. I’m looking forward to putting all this into practice!


Image by Erol Ahmed available under a CC0 license

Why I’ve just ditched my cloud-based password manager

TL;DR: I’ve ditched LastPass in favour of LessPass. The former stores your passwords in the cloud and requires a master password. The latter uses ‘deterministic password generation’ to keep things on your own devices.


Although I’ve used LastPass for the past six years, I’ve never been completely happy with it. There have been breaches, and a couple of years ago it was acquired by LogMeIn, a company not exactly revered in terms of trust and customer service. Their ‘emergency break-in’ feature makes me feel that my passwords are just one serious hack or government request away.

I read Hacker News on pretty much a daily basis and I’m particularly interested in the underlying approaches to technology that change over time. There are certain assumptions and habits of mind that come to be questioned which lead to different, usually better, solutions to certain problems. Today, the issue of cloud-based password managers was again on the front page.

From the linked article:

When passwords are stored, they must be encrypted and then retrieved later when needed. Storage, of any type, is a burden. Users are required to backup stored passwords and synchronize them across devices and implement measures to protect the stored passwords or at least log access to the stored passwords for audit purposes. Unless backups occur regularly, if the encrypted password file becomes corrupt or is deleted, then all the passwords are lost.

Users must also devise a “master password” to retrieve the encrypted passwords stored by the password management software. This “master password” is a weak point. If the “master password” is exposed, or there is a slight possibility of potential exposure, confidence in the passwords are lost.

Also:

I believe that password management should only occur locally on end use devices, not on remote systems and not in the client web browser.

Remote systems are outside the user’s control and thus cannot be trusted with password management. These systems may not be available when needed and may not be storing or transmitting passwords correctly. Externally, the systems may seem correct (https, etc.) but behind the scenes, no one really knows what’s going on, how the passwords are being transmitted, generated, stored, or who has access to them.

It’s pretty difficult to argue against these two points. Having felt uneasy for a while, I knew it was time to do something different. It was time to ditch LastPass.

I looked at a couple of different solutions: the one proposed by the author of the above quotations (too complex to set up), as well as one which looked promising, but now seems to be unsupported. In the end, I decided upon LessPass, which has been recommended to me by a few people this year.

How is LessPass different from LastPass? This gif from their explanatory blog post is helpful:

lesspass

All of this happens in the browser, without your data being transmitted anywhere else.

Basically, you enter the following:

  1. Name of the site or thing for which you need a password
  2. Your username
  3. A secret passphrase

…and, from these three pieces of information, LessPass uses complex algorithms and entropy stuff that I don’t understand to generate a password that you can then copy.

lesspass-explainer

The fact that I don’t understand it is fine, because there are people who do, and the code is Open Source. It can be inspected for bugs and vulnerabilities — unlike the proprietary solution provided by LastPass.
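
For the curious, deterministic password generation roughly works like this. The sketch below is my own simplification, not LessPass’s actual code: I’m assuming a PBKDF2-style key derivation, and the real thing is more careful than this (for instance around which character sets end up in the result).

```python
import hashlib
import string


def deterministic_password(site: str, login: str, master: str, length: int = 16) -> str:
    # Derive a big number from the three inputs; same inputs -> same number, every time.
    salt = (site + login).encode()
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 100_000)
    entropy = int.from_bytes(key, "big")

    # Spend that entropy choosing characters from a fixed alphabet.
    alphabet = string.ascii_letters + string.digits + "!#$%&*+-?@"
    password = []
    for _ in range(length):
        entropy, index = divmod(entropy, len(alphabet))
        password.append(alphabet[index])
    return "".join(password)


print(deterministic_password("example.com", "doug@example.com", "correct horse battery staple"))
```

Because nothing is ever stored or transmitted, the same three inputs regenerate the same password on any device.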

The options button to the bottom-right of the LessPass window gives the user advanced options such as:

  • Length of password
  • Types of character to include in the password
  • Increment number (if you’re forced to rotate passwords regularly)

My favourite LessPass feature, though, solves a nagging problem I’ve had for ages. If you have a long passphrase, then sometimes it can be very easy to mistype it. You don’t want to reveal your obfuscated passphrase to the world, so how can you be sure that you’ve typed it correctly?

lesspass-emoji

Simple! LessPass adds an emoji triplet to the right of the secret passphrase box. You’ll notice that it changes as you type and, when you finish, it should always look the same. If it doesn’t, then you’ve mistyped your passphrase.
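
I don’t know exactly how LessPass computes its triplet, but the general idea of a visual fingerprint is easy to sketch: hash the passphrase and use a few bytes of the digest to pick emoji. (A real implementation would want to be more careful than this, e.g. by using a keyed or deliberately slow hash, so the displayed triplet doesn’t leak anything useful about the passphrase.)

```python
import hashlib

EMOJI = ["🐙", "🦊", "🌵", "🚀", "🎩", "🔑", "🍩", "⚓", "🎲", "🌈", "🐝", "🧭", "🍀", "🔥", "🎻", "🪐"]


def passphrase_fingerprint(passphrase: str) -> str:
    # Hash the passphrase, then use the first three bytes to pick three emoji.
    digest = hashlib.sha256(passphrase.encode()).digest()
    return "".join(EMOJI[b % len(EMOJI)] for b in digest[:3])


print(passphrase_fingerprint("correct horse battery staple"))   # stable triplet
print(passphrase_fingerprint("correct horse battery stapel"))   # a typo gives a different one
```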

I’ll be making the transition from LastPass to LessPass over the next few weeks. It’s not as simple as just exporting from one database into another, as the whole point of doing this is that there is no one place that someone can hoover up my passwords.

So my plan of action is:

  1. Every time I use a service, create a new password using LessPass.
  2. Delete existing password in LastPass.
  3. Rinse and repeat until most of my passwords are generated via LessPass.
  4. Delete my LastPass account.
  5. Celebrate my higher levels of personal security.

Questions? Ask away in the comments section!


Photo: Crypt by Christian Ditaputratama under a CC BY-SA license

Some thoughts on Keybase, online security, and verification of identity

I’m going to stick my neck out a bit and say that, online, identity is the most important factor in any conversation or transaction. That’s not to say I’m a believer in tying these things to real-world, offline identities. Not at all.

Trust models change when verification is involved. For example, if I show up at your door claiming to be Doug Belshaw, how can I prove that’s the case? The easiest thing to do would be to use government-issued identification such as my passport or driving license. But what if I haven’t got any, or I’m unwilling to use it? (see the use case for CheapID) In those kinds of scenarios, you’re looking for multiple, lower-bar verification touchstones.

As human beings, we do this all of the time. When we meet someone new, we look for points of overlapping interest, often based around human relationships. This helps situate the ‘other’ in terms of our networks, and people can inherit trust based on existing relationships and interactions.

Online, it’s different. Sometimes we want to be anonymous, or at least pseudo-anonymous. There’s no reason, for example, why someone should be able to track all of my purchases just because I’m participating in a digital transaction. Hence Bitcoin and other cryptocurrencies.

When it comes to communication, we’ve got encrypted messengers, the best of which is widely regarded to be Signal from Open Whisper Systems. For years, we’ve tried (and failed) to use PGP/GPG to encrypt and verify email transactions, meaning that trusted interactions are increasingly taking place in locations other than your inbox.

On the one hand, we’ve got purist techies who constantly question whether a security/identity approach is the best way forward, while at the other end of the spectrum there are people using the same password (without two-factor authentication) for every app or service. Sometimes, you need a pragmatic solution.

keybase

I remember being convinced to sign up for Keybase.io when it launched thanks to this Hacker News thread, and particularly this comment from sgentle:

Keybase asks: who are you on the internet if not the sum of your public identities? The fact that those identities all make a certain claim is a proof of trust. In fact, for someone who knows me only online, it’s likely the best kind of trust possible. If you meet me in person and I say “I’m sgentle”, that’s a weaker proof than if I post a comment from this account. Ratchet that up to include my Twitter, Facebook, GitHub, personal website and so forth, and you’re looking at a pretty solid claim.

And if you’re thinking “but A Scary Adversary could compromise all those services and Keybase itself”, consider that an adversary with that much power would also probably have the resources to compromise highly-connected nodes in the web of trust, compromise PKS servers, and falsify real-world identity documents.

I think absolutism in security is counterproductive. Keybase is definitionally less secure than, say, meeting in person and checking that the person has access to all the accounts you expect, which is itself less secure than all of the above and using several forms of biometric identification to rule out what is known as the Face/Off attack.

The fight isn’t “people use Keybase” vs “people go to key-signing parties”, the fight is “people use Keybase” vs “fuck it crypto is too hard”. Those who need the level of security provided by in-person key exchanges still have that option available to them. In fact, it would be nice to see PKS as one of the identity proof backends. But for practical purposes, anything that raises the crypto floor is going to do a lot more good than dickering with the ceiling.

Since the Trump inauguration, I’ve seen more notifications that people are using Keybase. My profile is here: https://keybase.io/dajbelshaw. Recently, cross-platform apps for desktop and mobile devices have been added, meaning that not only can you verify your identity across the web, but you can also chat and share files securely.

It’s a great solution. The only word of warning I’d give is don’t upload your private key. If you don’t know how public and private keys work, then please read this article. You should never share your private key with anyone. Keep it to yourself, even if Keybase claim it will make your life easier.
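
If public and private keys are new to you, here’s a minimal sketch using Python’s cryptography library (nothing Keybase-specific) of why the private half must never leave your hands: it’s the part that signs, while the public half can only verify.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The private key signs; it never needs to leave your device.
private_key = Ed25519PrivateKey.generate()
# The public key only verifies; it's safe to publish anywhere.
public_key = private_key.public_key()

message = b"I am dajbelshaw and I control this account"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)
    print("Signature checks out: only the private key holder could have made it.")
except InvalidSignature:
    print("Forged or tampered message.")
```

Uploading the private key to any third party, however convenient, means trusting them never to sign things as you.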

To my mind, all of this fits into my wider work around Open Badges. Showing who you are and what you can do on the web is a multi-faceted affair, and I like the fact that I can choose to verify who I am. What I opt to keep separate from this profile (e.g. my gamertag, other identities) is entirely my choice. But verification of identity on the internet is kind of a big deal. We should all spend longer thinking about it, I reckon.

Main image: Blondinrikard Fröberg

Taking back control of the web: an easy way to host and run secure open source apps

Sandstorm.io

One of the most frustrating things about Open Source software is the lack of traction some genuinely great projects manage to achieve. There are countless examples of individuals deciding to ‘scratch their own itch’, and writing code that would also improve the lives of hundreds/thousands/millions of people. However, the technical skills required to get it up-and-running, not to mention the security concerns of getting to scale, are often prohibitive.

That’s where Sandstorm.io comes in. I first heard about the project when I was still at Mozilla, as the lead developer led a successful crowdfunding campaign that was supported by many readers of Hacker News. Essentially, it’s an incredibly simple, one-click way to install Open Source web apps. They’re deployed in containers called ‘grains’, which makes apps extremely secure and super-fast.

Sandstorm grains

As you can see, I’ve been playing about with all sorts of apps: note-capturing apps similar to Evernote, kanban tools that mimic the functionality of Trello, alternatives to Slack, ways to seamlessly pipe music to co-workers/conspirators, you name it!

There’s already an impressive selection of apps available in Sandstorm.io, with more being converted on a regular basis. Here are the ones available at the time of writing:

Sandstorm apps

At the moment, I’m just playing around. I can see a time when I decide to use this across devices and collaboratively with other people. Relying on venture capitalist-backed companies to look after my data, privacy, and security on a long-term basis is probably a bad idea.

While there’ll always be a free tier, during the beta all of the plans are free:

Sandstorm - plans

As you can see, given that the ‘Power User’ plan is currently free, I’ve decided to make full use of it. The apps are blisteringly fast and, when the beta ends, I’ve got the option of either paying for hosting through Sandstorm.io or hosting it on my own server (free!).

I’d have a play and see what you discover. I think you’ll find something interesting, something to convince you that Open Source done right can be just as good as, if not better than, proprietary, closed-source, VC-backed products!

Click here to go to Sandstorm.io

So here’s the problem…

Note: I’m kind of riffing off Everything Is Broken here. You should read that first.


I often think about leaving Twitter; about turning my annual Black Ops hiatus into something more… permanent.

The trouble is, I can’t.

I don’t mean in terms of “I don’t have it in me”, or “I’d prefer a better platform”. I mean that, if I did leave Twitter, I wouldn’t be able to fulfil my current role to the standard people have come to expect. In other words, there would be a professional cost to me not using a privately owned public space to communicate with others.

In fact, the same goes for Skype, Google+, and other proprietary tools: I could switch, but there are de facto standards at work here. If you don’t use what everyone else does, then you either (a) suffer a productivity hit, or (b) cause other people problems. Sometimes, it’s both.

By a ‘productivity hit’, I mean there’s a cognitive and cultural overhead of using tools outside the norm. I spoke to one person the other day – not a Mozilla employee – who said that their company’s commitment to security, privacy and Open Source software significantly hampers their productivity. In other words, they were trading some ease-of-use and productivity for data ownership, privacy and security.

By ‘cause other people problems’ I mean that, particularly in the fast-moving world I inhabit, you don’t want to be slowed down by negotiations around which technology to use. Much as I’d love to migrate to WebRTC-powered apps such as appear.in, the truth is that Skype pretty much works every time. You can rely on almost everyone having it installed.*

It used to be easier to understand. Companies would sell their software which you would install on your computer. Most ‘free’ software was also ‘Open Source’ and available under a permissive license. Now, however, everything is free, and the difference between the following is confusing for the end user:

  • Free as in beer – you get this thing for free, but there’s a catch! (the company is mining and/or selling your personal data to advertisers/insurers)
  • Free as in speech – you get this thing for free, and you can inspect the code and use it for pretty much whatever you want.

As Vinay Gupta often puts it, a lot of the free apps and software we’re accessing these days are a form of legalised spyware. The only reason we don’t call it that is because the software providing the services and doing the spying resides on their servers. Our shorthand for this is ‘the cloud’.

The trouble is, and let’s be honest here, that apart from the big hitters like Ubuntu and Firefox, the free-as-in-beer software tends to have better UX than the free-as-in-speech software. It’s not enough to have stand-alone apps and software any more – customers demand that services talk to one another. And rightly so. The problem is that unless you’re burning through VC cash or selling user data to advertisers, it’s difficult to fund this kind of stuff. Someone or something has got to pay for the servers.

To conclude, I’m kind of done with thinking of this as an individual problem for me to solve in isolation. Yes, I could sit on an island by myself running BSD and only using super-secure and private apps/services. But I’d be a pariah. What we’ve got here is a cultural, not a technological, problem: it’s something for us all to fix:

It wouldn’t take a total defection or a general revolt to change everything, because corporations and governments would rather bend to demands than die. These entities do everything they can get away with — but we’ve forgotten that we’re the ones that are letting them get away with things.

The above quotation is from the article I suggested that you read at the top of this post. If you still haven’t done so yet, then read it when you finish this one.

Remember: there’s not loads we can do in isolation – especially given the mindboggling complexity of the whole system. But we can talk with others about the situation in which we find ourselves. We can weave it into our conversations. We can join together in solidarity and, where there are opportunities, we can take informed action.

All of us need to up our game when it comes to the digital literacies and web literacy necessary to operate in this Brave New World. We shouldn’t be embarrassed about this in any way. After all, we’re collectively making it up as we go along.


*I think of Skype a bit like LinkedIn. No-one’s over the moon about using it, but until everyone migrates somewhere else, it’s what we’re stuck with.

On the NSA revelations

The Silent Writing Collective is all about the process of writing, not about the word count or subsequently publishing it elsewhere. Still, I wrote almost 2,000 words in an hour and felt what I produced was decent enough to post here (unedited, but with formatting improvements).


Ever since the revelations about the National Security Agency in the US hit a few months ago, I’ve been meaning to write about them. Ostensibly, I should be in a position to give some guidance. I usually know enough, conceptually speaking, about privacy and security to be able to give advice to others.

This time, however, things are different. There’s nothing much you can really do when a large, powerful country like the USA decides to wield its power in an undemocratic way. Not only have they got access to a bewildering array of technological innovations, but they’re doing so in secret. Just check out the statement on Lavabit’s front page:

I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot. I feel you deserve to know what’s going on – the first amendment is supposed to guarantee me the freedom to speak out in situations like this. Unfortunately, Congress has passed laws that say otherwise. As things currently stand, I cannot share my experiences over the last six weeks, even though I have twice made the appropriate requests.

Lavabit was the encrypted email service used by NSA whistleblower Edward Snowden. Reading between the lines, it appears that the NSA wanted Lavabit to give them access to at least his email account, if not unfettered access to *everyone’s* account. This mixture of absolute power and secrecy is extremely worrying. Not only does it mean they are beyond the control of ‘the people’ in any jurisdiction, but I’m left wondering what kind of advice it’s even worth giving out.

I try to walk the walk in my technological life. I don’t recommend people use things that I don’t use myself. While others I’ve seen on Twitter, Hacker News and other online spaces have attempted to lock things down, I’ve felt a bit powerless. I too want to lock things down. And while there’s no clear and present danger of me being locked up for anything, I’m not a big fan of some bored NSA employee being able to find out more about me than even I know about myself.

Absolute power corrupts absolutely. We know that. But the response to the revelations amongst the general public so far seems to be ‘meh’. Some have used the classic response of ‘if you’re doing nothing wrong you’ve got nothing to fear’. This is so wrong-headed it’s unbelievable. We all break laws every day – even though the laws of the UK are finally online. If someone has access and can dig through everything you do then of course they’ll find something incriminating. It’s so close to an Orwellian nightmare it’s untrue.

I already overshare on the Internet, it’s true. But that’s both a tactic and an expression of who I am. I believe in my right to freely and openly express who I am – and, more importantly, how I want to be seen – to the world at large. The thing that concerns me is that I don’t really know where the NSA’s knowledge of me and my actions starts and stops. Apparently they have the ability to eavesdrop on conversations by firing a laser beam at a plastic cup in the same room as their target. Or even a window. There’s a reason why we put curtains on our windows. The spaces in which we know we’re alone (or alone with significant others) are important for self-development and, dare I say it, human flourishing.

So what have I done in response to the NSA revelations? Not much, really. I’ve talked a good game and explored various options. I’ve kept up with the news and various articles linked to from Hacker News and The Guardian. But I haven’t actually done much. Part of that is because I don’t want to take the hit on my productivity – many of the ‘more secure’ replacements aren’t as slick or frictionless – but partly for another reason: I don’t feel like my weaponry against governments should be extreme crypto. I feel that it should be democratic processes. If someone or some organisation is abusing its power, then the people should have some recourse against them. Even if it’s a different sovereign country, the people of my country should be able to put pressure on them to do something about it.

Some of the things I’ve considered doing include switching from running Mac OS X on my (or rather, Mozilla’s) MacBook Pro to a variant of Linux. The MacBook is the machine I use most of the time. Only rarely – like now, actually, as I’m writing this – will I use a ThinkPad X61 running Chromium OS. I’ve tried to use Linux as my main operating system since 1997 when, as a 16-year-old, I bought a book on Red Hat Linux to try and get my head around it. I kind of know my way around some of the commands, but it greatly frustrates me when updates break really important things such as wireless networking. Macs just work in a way I hadn’t experienced before using them. I suppose this ‘Chromiumbook’ isn’t bad, but I just feel that everything I write is fuelling Google’s ad dollars.

I think there’s nothing much we can do from a technological point of view as individual users versus the might of the NSA. Indeed, it might make matters worse as apparently their default filter for ‘is this person dangerous?’ is ‘if they use encryption, yes’. That, of course, makes them not even worth parodying, but does make me want to throw my hands in the air. Instead, though, what I think it’s important to do is to think about security and privacy more generally. What is it that we want to be secure? Who do we want to protect our privacy from?

I’m only speaking for myself here, but I think it might be more widely applicable:

  • I don’t want to be the victim of identity theft.
  • I want to be able to surf the Web anonymously if what I’m looking at/for could potentially compromise me personally or professionally.
  • While I’ve pretty much given up on email ever being secure, I want other communications to be locked down and visible to others only if at least one of the parties involved wants this to be the case.
  • I want to be able to craft multiple, discrete pseudo-anonymous personas without being forced to reveal the connections between them.

I suppose, overall, I don’t want to be watched or feel that I’m being watched. This might seem odd coming from someone who seemingly tweets and otherwise shares a fair bit of detail from my life. The difference is that it’s under my control. You’re seeing glimpses into my life through the filter or lens that I choose to put on it. That’s autonomy. That’s freedom.

So I am going to make some changes, but I’m not going to go nuts. I’ll keep doing what I can to put pressure on the UK and US governments to do something about the NSA over-reaching. I’ll keep up to date and support organisations like the Electronic Frontier Foundation who campaign on our behalf (their Who Has Your Back? 2013 report is well worth a read). I’m going to see what’s available in terms of other services that may offer more privacy and security. But, instead of automatically jumping ship, I’ll attempt to weigh the productivity cost. If it doesn’t seem to be worth it, then I won’t do it.

For all I’ve written above about how important I see security and privacy, I’ve come to expect that the technological tools I use afford me a certain level of fluency and productivity. My job and professional reputation indirectly (and at times, directly) depend upon this. I suppose there’s a heavily performative notion in there: I have to not only be productive but be seen to be productive (at least in the construct that’s in my head).

Also, it’s important to have at least a connection to ‘the (wo)man on the street’. As soon as you look like, or come across as, a special case then people stop paying attention to you. I’ve experienced some of that because I wrote my doctoral thesis on digital literacies and/or because I now work for Mozilla. “It’s easy for you to say,” people exclaim. Well, it’s not actually. It’s difficult and tortuous and philosophically problematic. I spend far too long thinking about this kind of stuff.

What I think is important is that we build a bridge between those who think the NSA revelations show that western governments somehow have “got our back” and those who, in the words of Marc Scott, have glued a tinfoil hat to their heads. It’s important not to talk past one another on issues like these. After all, these aren’t issues around cryptography or terrorism but around freedom, liberty and the pursuit of happiness, writ large.

The thing that concerns me to the point of lying awake thinking at night is the world that my six year-old son and two year-old daughter will inhabit. My formative years were spent growing up with the Web in its Wild West, frontier town-feel years. Being able to put up a website (in my case, as a sixteen year old, one about Monty Python) and have it accessible anywhere in the world was mind-blowing. But it wasn’t just that. It was the fact that people could connect with one another without boundaries relating to power, geography, class or skin colour.

That’s the Web we’ve lost – it’s well worth reading Anil Dash for more on that. The networked world that my children will inhabit (unless we do something about it) will constrain instead of liberate. It will be something to fear instead of something to embrace. And that greatly saddens me.

So beyond making relevant changes to my own personal setup I suppose I’ve got a responsibility to educate those around me. First, I need to scare them into taking privacy and security seriously. But then, second, I need to show them what appropriate steps look like to protect that. And if, as in the case of the NSA, appropriate steps on a personal level aren’t enough, then I need to encourage them to take appropriate (collective) political action.

I hope this goes some way to explaining why I haven’t got a 10-step guide on how to change your hardware/software setup to be NSA-proof. You can’t be. But together we can agitate for a better world. That’s not to say we should be complacent about our technological setups. Not at all. Now, more than ever, is a great time to review the information and details that may be unintentionally leaking out to the wider world without your knowledge.

In conclusion, then, I’ll not be breaking out my tinfoil hat anytime soon. And I’ll not be locking down my machines to a ridiculous extent. I’ll be trying out new operating systems, software and even hardware, but still want to be able to use someone else’s machine without huge amounts of hassle. And I need, especially for work reasons, to be able to communicate with others without being some kind of ‘special case’ that other people have to tolerate or, more likely, avoid.

Image CC BY-NC-SA Truthout.org

I am not Richard Stallman

Introduction

Yesterday I headed to Lifehacker to get my weekly dose of their excellent ‘How I Work’ series. However, this week they decided to hand it over to readers using their blogging platform (Kinja). I decided to take part and you can see my response here (warning: includes photo of my messy study!)

Marc Scott picked up on this via Twitter and wrote a masterful post entitled How I Do My Computing by !=Richard Stallman. A sample:

The Internet on my laptop runs really slowly and it’s quite difficult to see sites because of all the toolbars that take up half of the screen. Also when I load the Internet I get annoyed by all the pop-ups that suddenly appear for adult dating sites and on-line gambling. I used to get lots of annoying messages on the Internet about things like ActiveX, but a friend showed me how to change my security settings so they don’t come any more.

Class.

I am not Richard Stallman

At the end of Marc’s post he linked to the original post by Stallman (of which his was a parody).

Wow. Stallman is hardcore:

I occasionally use X11 for tasks that need graphics, but mostly I use a text console. I find that the text console is more efficient and convenient for the bulk of the work I do, which is editing text.

and:

I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I fetch web pages from other sites by sending mail to a program (see git://git.gnu.org/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it.

That’s as close to tinfoil hat-wearing as it actually gets.

The Moral

As Seth Godin often says, we need to surround ourselves (intellectually, if we can’t physically) with outliers in order to challenge our thinking:

The crowd has more influence on us than we have on the crowd. It’s not an accident that breakthroughs in music, architecture, software, athletics, fashion and cuisine come in bunches, often geographic. If you need to move, move. At least change how and where you exchange your electrons and your ideas.

After all, as they say, bad habits are like a comfortable bed: easy to get into but hard to get out of.

There’s a concept in political theory called the Overton window that is used to describe the narrow range of ideas the public will accept. The degrees of acceptance go like this:

Overton assigned a spectrum of “more free” and “less free”, with regard to government intervention, oriented vertically on an axis. When the window moves or expands along this axis, an idea at a given location may become more or less politically acceptable as the window moves relative to it. The degrees of acceptance of public ideas can be described roughly as:

  • Unthinkable
  • Radical
  • Acceptable
  • Sensible
  • Popular
  • Policy

So at the start of the year, before the NSA revelations, it would be Unthinkable for an ‘ordinary’ person to adopt anything close to Stallman’s approach. Now, however, it’s at least Radical if not Acceptable or Sensible.

Conclusion

I’m not suggesting that we crypto everything or become paranoid to the extent that it consumes us. What I am suggesting (and what I’m doing myself) is to review the connected technologies and services I’m using. If you want to do something similar then I highly recommend you check out the Electronic Frontier Foundation’s Who Has Your Back? 2013 and, if you’ve never used Linux before, give elementaryOS a spin.* It’ll probably be an upgrade from what you’re using.

Questions? Comments? I want to read them. Add yours below!

Image CC BY-NC Maurizio Scorianz


*Want to go a step further? Try Tails.

Why I’m using iPREDator now the Digital Economy Bill has been passed

Introduction

We just love our unelected leaders in the UK. Not only did Gordon Brown get to become Prime Minister without being elected to the position, but Peter (now ‘Lord’) Mandelson has his fingers in more pies of government behind the scenes than I think most people realise. I always think of Gríma Wormtongue from Lord of the Rings when I see him.

And now, of course, Mandelson is ‘First Secretary of State’, an honorific title all but making him Deputy Prime Minister. Oh, and he’s also Secretary of State for Business, Innovation and Skills as well as President of the Board of Trade. It’s a complete coincidence, of course, that his interest in the Digital Britain agenda (and ‘protecting’ intellectual property rights) was piqued after being wined and dined by David Geffen, co-founder of the Dreamworks studio with Steven Spielberg.

The Digital Economy Act

You would have thought that after all the scandal about MPs’ expenses, Parliament would have cleaned itself up. Unfortunately, the closest they get to this is a process called ‘wash-up’. As Martin Bell writes in the Guardian:

This unfortunately has nothing to do with cleansing parliament from its many stains of corruption – more necessary now than ever. It is the term used to describe the negotiations between the parties to decide which bills will survive at the end of the parliamentary session and which will not. It is a secretive process, the modern equivalent of the smoke-filled room. Those taking part are the parties’ whips and business managers, plus officials from various government departments. Those excluded are the rank and file of MPs, together with independents and crossbenchers in the Lords. The wash-up is a stitch-up devised by and for the main political parties.

Whilst you can read the Digital Economy Bill (and subsequent Act) online, it’s best summarised in articles like this one. The bits that really irritate me?

  • Government powers to cut off internet connections of those suspected in illegal file-sharing activities.
  • More government control over who can register .uk domain names and for what purposes.

As many commentators have pointed out, once the heavy hand of the State is upon you, the burden of proof will rest with you to prove that you haven’t been engaging in illegal activities. Proving that you haven’t done something is obviously a lot harder than proving that you have.

iPREDator

Fortunately, there are others who think like me. Not least the people behind both The Pirate Bay and the Swedish Pirate Party, who have come up with iPREDator (named, ironically, after the IPRED legislation in Sweden). It gives users a way of staying anonymous online.

How does it work? Via VPN (Virtual Private Network). Basically, they provide a tunnel through the internet and a proxy server through which to access everything online. You route your internet traffic through this and they guarantee not to spill the beans.

Why do I feel the need to cover my tracks? I’m not a massive user of Bittorrent and I’m certainly not engaged in any terrorist activities. But I do object to the State spying on me and potentially accusing me of stuff to shut down my internet connection. So I’m protecting myself.

How about you?
