Open Thinkering


Tag: security

More on the mechanics of GDPR

Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.


‘Enjoyment’ and ‘compliance-focused courses’ are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from Futurelearn’s course on Understanding the General Data Protection Regulation. This post summarises some of my learning and builds upon my previous post.

This week, the focus was on the rights of data subjects, starting with a discussion of the ‘modalities’ by which communication between the data controller or processor and the data subject takes place:

By modalities, we mean different mechanisms that are used to facilitate the exercise of data subjects’ rights under the GDPR, such as those relating to different forms of information provision (in writing, spoken, electronically) and other actions to be taken when data subjects invoke their rights.

Although the videos could be improved (I just use the transcripts), the mix of real-world examples, quizzes, and reflection is great and suits the way I learn best.

I discovered that the GDPR not only makes provision for what should be communicated by data controllers, but also for how this should be done:

In the first place, measures must be taken by data controllers to provide any information or any communication relating to the processing to these individuals in a concise, transparent, intelligible and easily accessible form, using the language that is clear and plain. For instance, it should be done when personal data are collected from data subjects or when the latter exercise their rights, such as the right of access. This requirement of transparent information and communication is especially important when children are data subjects.

Moreover, unless the data subject is somehow attempting to abuse the GDPR’s provisions, the data controller must provide the requested information free of charge.

The number of times my surname is spelled incorrectly (often ‘Bellshaw’), or companies have other details wrong, is astounding. It’s good to know, therefore, that the GDPR focuses on rectification of individuals’ personal data:

In addition, the GDPR contains another essential right that cannot be disregarded. This is the right to rectification. If controllers store personal data of individuals, the latter are further entitled to the right to rectify, without any undue delay, inaccurate information concerning them. Considering the purpose of the processing, any data subject has the right to have his or her personal data completed such as, for instance, by providing a supplementary statement.

So far, I’ve focused on me as a user of technologies — and, indeed, the course uses Google’s services as an example. However, as lead for Project MoodleNet, the reason I’m doing this course is as the representative of Moodle, an organisation that would be both data controller and processor.

There are specific things that must be built into any system that collects personal data:

At the time of the first communication with data subjects, the existence of the right to object – as addressed earlier – must be indicated to data subjects in a clear manner and separately from other information. This right can be exercised by data subjects when we deal with the use of information society services by automated means using technical specifications. Importantly, the right to object also exists when individuals’ personal data are processed for scientific or historical research or statistical purposes. This is, however, not the case if the processing is carried out for reasons of public interest.

Project MoodleNet will be a valuable service, but not from a scientific, historical, or statistical point of view. Nor will the data processing be carried out for reasons of public interest. As such, the ‘right to object’ should be set out clearly when users sign up for the service.

In addition, users need to be able to move their data out of the service and erase what was previously there:

The right to erasure is sometimes known as the right to be forgotten, though this denomination is not entirely correct. Data subjects have the right to obtain from data controllers the erasure of personal data concerning them without undue delay.

I’m not entirely clear what ‘undue delay’ means in practice, but we should build systems with these things in mind. Being able to add, modify, and delete information is a key part of a social network. I wonder what happens when blockchain is involved, given that it’s immutable?
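To make the add/modify/delete lifecycle concrete, here’s a toy sketch of what a user-data store honouring rectification and erasure might look like. This is purely my own illustration (the class and method names are hypothetical, not Project MoodleNet code), and a real system would also need audit logs, backups, and propagation to downstream processors:

```python
class UserDataStore:
    """Toy store illustrating GDPR-style rectification and erasure.

    Hypothetical sketch only -- real systems must also erase data from
    backups and notify any processors the data was shared with.
    """

    def __init__(self):
        self._records = {}

    def add(self, subject_id, data):
        # Collection: store a copy of the subject's personal data.
        self._records[subject_id] = dict(data)

    def rectify(self, subject_id, field, value):
        # Right to rectification: correct inaccurate data without undue delay.
        self._records[subject_id][field] = value

    def erase(self, subject_id):
        # Right to erasure: remove the subject's personal data entirely.
        self._records.pop(subject_id, None)

    def export(self, subject_id):
        # Data portability: hand the subject their data in a usable form.
        return dict(self._records.get(subject_id, {}))


store = UserDataStore()
store.add("doug", {"surname": "Bellshaw"})   # misspelled on sign-up
store.rectify("doug", "surname", "Belshaw")  # rectification request
print(store.export("doug"))                  # {'surname': 'Belshaw'}
store.erase("doug")
print(store.export("doug"))                  # {}
```

The point of the sketch is simply that these rights are easiest to honour when they’re designed in from the start, rather than bolted on later.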

The thing that concerns most organisations when it comes to GDPR is Article 79, which states that data subjects have legal recourse if they’re not happy with the response they receive:

Furthermore, we should mention the right to an effective judicial remedy against a controller or processor laid down in Article 79. It allows data subjects to initiate proceedings against data controllers or processors before a court of the Member State of the establishment of controllers or processors or in the Member State where they have their habitual residence unless controllers or processors are public authorities of the Member States and exercise their public powers. Thus, data subjects can directly complain before a judicial institution against controllers and processors, such as Google or others.

I’m particularly interested in what effect data subjects having the right “not to be subjected to automated individual decision-making” will have. I can’t help but think that (as Google has already started to do through granular opt-in questions) organisations will find ways to make users feel like it’s in their best interests. They already do that with ‘personalised advertising’.

There’s a certain amount of automation that can be useful, the standard example being Amazon’s recommendations system. However, I think the GDPR focuses more on things like decisions about whether or not to give you insurance based on your social media profile:

There are three additional rights of data subjects laid down in the General Data Protection Regulation, and we will cover them here. These rights are – the right not to be subjected to automated individual decision-making, the right to be represented by organisations and others, and the right to compensation. Given that we live in a technologically advanced society, many decisions can be taken by the systems in an automatic manner. The GDPR grants to all of us a right not to be subjected to a decision that is based only on an automated processing, which includes profiling. This decision must significantly affect an individual, for example, by creating certain legal effects.

Thankfully, when it comes to challenging organisations on the provisions of the GDPR, data subjects can delegate their representation to a non-profit organisation. This is a sensible step, and prevents lawyers from becoming rich off GDPR challenges. Otherwise, I can imagine data sovereignty becoming the next personal injury industry.

If an individual feels that he or she can better give away his or her representation to somebody else, this individual has the right to contact a not-for-profit association– such as European Digital Rights – in order to be represented by it in filing complaints, exercising some of his or her rights, and receiving compensation. This might be useful if an action is to be taken against such a tech giant as Google or any other person or entity. Finally, persons who have suffered material or non-material damage as a result of an infringement of the GDPR have the right to receive compensation from the controller or processor in question.

Finally, and given that the GDPR applies not only across European countries but to any organisation that processes EU citizens’ data, the following is interesting:

The European Union and its Member States cannot simply impose restrictions addressed in Article 23 GDPR when they wish to. These restrictions must respect the essence of the fundamental rights and freedoms and be in line with the requirements of the EU Charter of Fundamental Rights and the European Convention for the Protection of Human Rights and Fundamental Freedoms. In addition, they are required to constitute necessary and proportionate measures in a democratic society meaning that there must be a pressing social need to adopt these legal instruments and that they must be proportionate to the pursued legitimate aim. Also, they must be aiming to safeguard certain important interests. So, laws adopted by the EU of its Members States that seek to restrict the scope of data subjects’ rights are required to be necessary and proportionate and must protect various interests discussed below.

I learned a lot this week which will stand me in good stead as we design Project MoodleNet. I’m looking forward to putting all this into practice!


Image by Erol Ahmed available under a CC0 license

Why I’ve just ditched my cloud-based password manager

TL;DR: I’ve ditched LastPass in favour of LessPass. The former stores your passwords in the cloud and requires a master password. The latter uses ‘deterministic password generation’ to keep things on your own devices.


Although I’ve used LastPass for the past six years, I’ve never been completely happy with it. There have been breaches, and a couple of years ago it was acquired by LogMeIn, a company not exactly revered for trust and customer service. Their ‘emergency break-in’ feature makes me feel that my passwords are just one serious hack or government request away.

I read Hacker News on pretty much a daily basis and I’m particularly interested in the underlying approaches to technology that change over time. There are certain assumptions and habits of mind that come to be questioned which lead to different, usually better, solutions to certain problems. Today, the issue of cloud-based password managers was again on the front page.

From the linked article:

When passwords are stored, they must be encrypted and then retrieved later when needed. Storage, of any type, is a burden. Users are required to backup stored passwords and synchronize them across devices and implement measures to protect the stored passwords or at least log access to the stored passwords for audit purposes. Unless backups occur regularly, if the encrypted password file becomes corrupt or is deleted, then all the passwords are lost.

Users must also devise a “master password” to retrieve the encrypted passwords stored by the password management software. This “master password” is a weak point. If the “master password” is exposed, or there is a slight possibility of potential exposure, confidence in the passwords are lost.

Also:

I believe that password management should only occur locally on end use devices, not on remote systems and not in the client web browser.

Remote systems are outside the user’s control and thus cannot be trusted with password management. These systems may not be available when needed and may not be storing or transmitting passwords correctly. Externally, the systems may seem correct (https, etc.) but behind the scenes, no one really knows what’s going on, how the passwords are being transmitted, generated, stored, or who has access to them.

It’s pretty difficult to argue against these two points. Having felt uneasy for a while, I knew it was time to do something different. It was time to ditch LastPass.

I looked at a couple of different solutions: the one proposed by the author of the above quotations (too complex to set up), as well as one which looked promising, but now seems to be unsupported. In the end, I decided upon LessPass, which has been recommended to me by a few people this year.

How is LessPass different from LastPass? This gif from their explanatory blog post is helpful:

[Image: LessPass demo gif]

All of this happens in the browser, without your data being transmitted anywhere else.

Basically, you enter the following:

  1. Name of the site or thing for which you need a password
  2. Your username
  3. A secret passphrase

…and, from these three pieces of information, LessPass uses complex algorithms and entropy stuff that I don’t understand to generate a password that you can then copy.
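To give a flavour of how deterministic password generation can work, here’s a simplified sketch of the general idea. This is my own illustration of the technique, not LessPass’s actual algorithm (which also handles character-class rules), and the function and parameter names are hypothetical:

```python
import hashlib

def derive_password(site, login, passphrase, length=16,
                    charset="abcdefghijklmnopqrstuvwxyz"
                            "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
                    counter=1):
    """Deterministically derive a password from site + login + passphrase.

    The same inputs always produce the same password, so nothing needs
    to be stored or synced anywhere. Bumping `counter` rotates the
    password (the 'increment number' mentioned below).
    """
    # Stretch the secret with PBKDF2 so brute-forcing the passphrase is costly.
    salt = f"{site}{login}{counter}".encode()
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

    # Map the derived bytes onto the allowed character set.
    number = int.from_bytes(key, "big")
    chars = []
    for _ in range(length):
        number, index = divmod(number, len(charset))
        chars.append(charset[index])
    return "".join(chars)


pw1 = derive_password("example.com", "doug", "correct horse battery staple")
pw2 = derive_password("example.com", "doug", "correct horse battery staple")
assert pw1 == pw2  # deterministic: nothing stored, nothing to leak from a server
```

Because the password is recomputed on demand, there is no vault to breach; the trade-off is that anyone who learns your passphrase can derive every password, so the passphrase itself has to be strong.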

[Image: LessPass explainer]

The fact that I don’t understand it is fine, because there are people who do, and the code is Open Source. It can be inspected for bugs and vulnerabilities — unlike the proprietary solution provided by LastPass.

The options button to the bottom-right of the LessPass window gives the user advanced options such as:

  • Length of password
  • Types of character to include in the password
  • Increment number (if you’re forced to rotate passwords regularly)

My favourite LessPass feature, though, solves a nagging problem I’ve had for ages. If you have a long passphrase, it can be easy to mistype. You don’t want to reveal your obfuscated passphrase to the world, so how can you be sure that you’ve typed it correctly?

[Image: LessPass emoji fingerprint]

Simple! LessPass adds an emoji triplet to the right of the secret passphrase box. You’ll notice that it changes as you type and, when you finish, it should always look the same. If it doesn’t, then you’ve mistyped your passphrase.

I’ll be making the transition from LastPass to LessPass over the next few weeks. It’s not as simple as just exporting from one database into another, as the whole point of doing this is that there is no one place that someone can hoover up my passwords.

So my plan of action is:

  1. Every time I use a service, create a new password using LessPass.
  2. Delete existing password in LastPass.
  3. Rinse and repeat until most of my passwords are generated via LessPass.
  4. Delete my LastPass account.
  5. Celebrate my higher levels of personal security.

Questions? Ask away in the comments section!


Photo: Crypt by Christian Ditaputratama under a CC BY-SA license

Some thoughts on Keybase, online security, and verification of identity

I’m going to stick my neck out a bit and say that, online, identity is the most important factor in any conversation or transaction. That’s not to say I’m a believer in tying these things to real-world, offline identities. Not at all.

Trust models change when verification is involved. For example, if I show up at your door claiming to be Doug Belshaw, how can I prove that’s the case? The easiest thing to do would be to use government-issued identification such as my passport or driving licence. But what if I haven’t got any, or I’m unwilling to use it? (See the use case for CheapID.) In those kinds of scenarios, you’re looking for multiple, lower-bar verification touchstones.

As human beings, we do this all of the time. When we meet someone new, we look for points of overlapping interest, often based around human relationships. This helps situate the ‘other’ in terms of our networks, and people can inherit trust based on existing relationships and interactions.

Online, it’s different. Sometimes we want to be anonymous, or at least pseudo-anonymous. There’s no reason, for example, why someone should be able to track all of my purchases just because I’m participating in a digital transaction. Hence Bitcoin and other cryptocurrencies.

When it comes to communication, we’ve got encrypted messengers, the best of which is widely regarded to be Signal from Open Whisper Systems. For years, we’ve tried (and failed) to use PGP/GPG to encrypt and verify email transactions, meaning that trusted interactions are increasingly taking place in locations other than your inbox.

At one end of the spectrum, we’ve got purist techies who constantly question whether a security/identity approach is the best way forward; at the other, there are people using the same password (without two-factor authentication) for every app or service. Sometimes, you need a pragmatic solution.

[Image: Keybase]

I remember being convinced to sign up for Keybase.io when it launched thanks to this Hacker News thread, and particularly this comment from sgentle:

Keybase asks: who are you on the internet if not the sum of your public identities? The fact that those identities all make a certain claim is a proof of trust. In fact, for someone who knows me only online, it’s likely the best kind of trust possible. If you meet me in person and I say “I’m sgentle”, that’s a weaker proof than if I post a comment from this account. Ratchet that up to include my Twitter, Facebook, GitHub, personal website and so forth, and you’re looking at a pretty solid claim.

And if you’re thinking “but A Scary Adversary could compromise all those services and Keybase itself”, consider that an adversary with that much power would also probably have the resources to compromise highly-connected nodes in the web of trust, compromise PKS servers, and falsify real-world identity documents.

I think absolutism in security is counterproductive. Keybase is definitionally less secure than, say, meeting in person and checking that the person has access to all the accounts you expect, which is itself less secure than all of the above and using several forms of biometric identification to rule out what is known as the Face/Off attack.

The fight isn’t “people use Keybase” vs “people go to key-signing parties”, the fight is “people use Keybase” vs “fuck it crypto is too hard”. Those who need the level of security provided by in-person key exchanges still have that option available to them. In fact, it would be nice to see PKS as one of the identity proof backends. But for practical purposes, anything that raises the crypto floor is going to do a lot more good than dickering with the ceiling.

Since the Trump inauguration, I’ve seen more notifications that people are using Keybase. My profile is here: https://keybase.io/dajbelshaw. Recently, cross-platform apps for desktop and mobile devices have been added, meaning that not only can you verify your identity across the web, but you can also chat and share files securely.

It’s a great solution. The only word of warning I’d give is don’t upload your private key. If you don’t know how public and private keys work, then please read this article. You should never share your private key with anyone. Keep it to yourself, even if Keybase claim it will make your life easier.

To my mind, all of this fits into my wider work around Open Badges. Showing who you are and what you can do on the web is a multi-faceted affair, and I like the fact that I can choose to verify who I am. What I opt to keep separate from this profile (e.g. my gamertag, other identities) is entirely my choice. But verification of identity on the internet is kind of a big deal. We should all spend longer thinking about it, I reckon.

Main image: Blondinrikard Fröberg
