Category: Technology
Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.
‘Enjoyment’ and ‘compliance-focused courses’ are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from Futurelearn’s course on Understanding the General Data Protection Regulation. This post summarises some of my learning and builds upon my previous post.
This week, the focus was on the rights of data subjects, starting with a discussion of the ‘modalities’ by which communication between the data controller or processor and the data subject takes place:
By modalities, we mean different mechanisms that are used to facilitate the exercise of data subjects’ rights under the GDPR, such as those relating to different forms of information provision (in writing, spoken, electronically) and other actions to be taken when data subjects invoke their rights.
Although the videos could be improved (I just use the transcripts) the mix of real-world examples, quizzes, and reflection is great and suits the way I learn best.
I discovered that the GDPR makes provision not only for what should be communicated by data controllers, but also for how this should be done:
In the first place, measures must be taken by data controllers to provide any information or any communication relating to the processing to these individuals in a concise, transparent, intelligible and easily accessible form, using the language that is clear and plain. For instance, it should be done when personal data are collected from data subjects or when the latter exercise their rights, such as the right of access. This requirement of transparent information and communication is especially important when children are data subjects.
Moreover, unless the data subject is somehow attempting to abuse the GDPR’s provisions, the data controller must provide the requested information free of charge.
The number of times my surname is spelled incorrectly (often ‘Bellshaw’), or companies have other details incorrect, is astounding. It’s good to know, therefore, that the GDPR focuses on rectification of individuals’ personal data:
In addition, the GDPR contains another essential right that cannot be disregarded. This is the right to rectification. If controllers store personal data of individuals, the latter are further entitled to the right to rectify, without any undue delay, inaccurate information concerning them. Considering the purpose of the processing, any data subject has the right to have his or her personal data completed such as, for instance, by providing a supplementary statement.
So far, I’ve focused on me as a user of technologies — and, indeed, the course uses Google’s services as an example. However, as lead for Project MoodleNet, the reason I’m doing this course is as the representative of Moodle, an organisation that would be both data controller and processor.
There are specific things that must be built into any system that collects personal data:
At the time of the first communication with data subjects, the existence of the right to object– as addressed earlier– must be indicated to data subjects in a clear manner and separately from other information. This right can be exercised by data subjects when we deal with the use of information society services by automated means using technical specifications. Importantly, the right to object also exists when individuals’ personal data are processed for scientific or historical research or statistical purposes. This is, however, not the case if the processing is carried out for reasons of public interest.
Project MoodleNet will be a valuable service, but not from a scientific, historical, or statistical point of view. Nor will the data processing be carried out for reasons of public interest. As such, the ‘right to object’ should be set out clearly when users sign up for the service.
In addition, users need to be able to move their data out of the service and erase what was previously there:
The right to erasure is sometimes known as the right to be forgotten, though this denomination is not entirely correct. Data subjects have the right to obtain from data controllers the erasure of personal data concerning them without undue delay.
I’m not entirely clear what ‘undue delay’ means in practice, but when building systems, we should keep these requirements in mind. Being able to add, modify, and delete information is a key part of a social network. I wonder what happens when blockchain is involved, given that it’s immutable?
The thing that concerns most organisations when it comes to GDPR is Article 79, which states that data subjects have legal recourse if they’re not happy with the response they receive:
Furthermore, we should mention the right to an effective judicial remedy against a controller or processor laid down in Article 79. It allows data subjects to initiate proceedings against data controllers or processors before a court of the Member State of the establishment of controllers or processors or in the Member State where they have their habitual residence unless controllers or processors are public authorities of the Member States and exercise their public powers. Thus, data subjects can directly complain before a judicial institution against controllers and processors, such as Google or others.
I’m particularly interested in what effect data subjects having the right “not to be subjected to automated individual decision-making” will have. I can’t help but think that (as Google has already started to do through granular opt-in questions) organisations will find ways to make users feel like it’s in their best interests. They already do that with ‘personalised advertising’.
There’s a certain amount of automation that can be useful, the standard example being Amazon’s recommendations system. However, I think the GDPR focuses more on things like decisions about whether or not to give you insurance based on your social media profile:
There are three additional rights of data subjects laid down in the General Data Protection Regulation, and we will cover them here. These rights are – the right not to be subjected to automated individual decision-making, the right to be represented by organisations and others, and the right to compensation. Given that we live in a technologically advanced society, many decisions can be taken by the systems in an automatic manner. The GDPR grants to all of us a right not to be subjected to a decision that is based only on an automated processing, which includes profiling. This decision must significantly affect an individual, for example, by creating certain legal effects.
Thankfully, when it comes to challenging organisations on the provisions of the GDPR, data subjects can delegate their representation to a non-profit organisation. This is a sensible step, and prevents lawyers from becoming rich from GDPR challenges. Otherwise, I can imagine data sovereignty becoming the next personal injury industry.
If an individual feels that he or she can better give away his or her representation to somebody else, this individual has the right to contact a not-for-profit association– such as European Digital Rights – in order to be represented by it in filing complaints, exercising some of his or her rights, and receiving compensation. This might be useful if an action is to be taken against such a tech giant as Google or any other person or entity. Finally, persons who have suffered material or non-material damage as a result of an infringement of the GDPR have the right to receive compensation from the controller or processor in question.
Finally, and given that the GDPR applies not only across European countries, but to any organisation that processes EU citizen data, the following is interesting:
The European Union and its Member States cannot simply impose restrictions addressed in Article 23 GDPR when they wish to. These restrictions must respect the essence of the fundamental rights and freedoms and be in line with the requirements of the EU Charter of Fundamental Rights and the European Convention for the Protection of Human Rights and Fundamental Freedoms. In addition, they are required to constitute necessary and proportionate measures in a democratic society meaning that there must be a pressing social need to adopt these legal instruments and that they must be proportionate to the pursued legitimate aim. Also, they must be aiming to safeguard certain important interests. So, laws adopted by the EU of its Members States that seek to restrict the scope of data subjects’ rights are required to be necessary and proportionate and must protect various interests discussed below.
I learned a lot this week which will stand me in good stead as we design Project MoodleNet. I’m looking forward to putting all this into practice!
This week, I spent Monday evening to Wednesday evening at Wortley Hall, near Sheffield, England. It’s a stately home run by a worker-owned co-op and I was there with my We Are Open colleagues for the second annual Co-operative Technologists (CoTech) gathering. CoTech is a network of UK-based co-operatives who are focused on tech and digital.
Last year, at the first CoTech gathering, we were represented by John Bevan — who was actually instrumental in getting the network off the ground. This time around, not only did all four members of We Are Open attend, but one of us (Laura Hilliger) actually helped facilitate the event.
I wasn’t too sure what to expect, but I was delighted by the willingness of the 60+ people present to get straight into finding ways we can all work together. We made real progress over the couple of days I was there, and I was a little sad that other commitments meant I couldn’t stay until the bitter end on Thursday lunchtime.
We self-organised into groups, and the things I focused on were introducing Nextcloud as a gap in the CoTech shared services landscape, and helping define processes for using the various tools we have access to. Among the many other things that people collaborated on were sales and marketing, potentially hiring our first CoTech member of staff, games that could help people realise that they might be better working for a co-op, defining a constitution, and capturing the co-operative journeys that people have been on.
There was a lot of can-do attitude and talent in the room, coupled with a real sense that we’re doing important work that can help change the world. There’s a long history of co-operation that we’re building upon, and the surroundings of Wortley Hall certainly inspired us in our work! Our co-op will definitely be back next year, and I’m sure most of us will meet at CoTech network events again before then.
The CoTech wiki is available here. As with all of these kinds of events, we had a few problems with the wifi, which means that, at the time of publishing this post, not everything has been uploaded to the wiki. It will appear there in due course.
Although there are member-only spaces (and benefits), anyone – whether currently a member of a worker-owned co-op or not – is also welcome to join the CoTech community discussion forum.
One of the great things about the internet, and one of the things I think we’re losing, is the ability to experiment. I like to experiment with my technologies, my identity, and my belief systems. This flies in the face of services like Facebook that insist on a single ‘real’ identity while slowly deskilling their users.
I’ve been messing about with ZeroNet, which is something I’ve mentioned before, and which gets close to something I’ve wanted now for quite some time: an ‘untakedownable’ website. Whether it’s DDoS attacks, DNS censorship, or malicious code injection, I want a platform that, no matter what I choose to say, will stay up.
To access sites via ZeroNet, you have to be running the ZeroNet service. By default, you view a clone of the site you want to visit on your own machine, accessed in the web browser. That means it’s fast. When the site creator updates the site/blog/wiki/whatever, that’s then sent to peers to distribute. It’s all lightning-quick, and built on BitTorrent technology and Bitcoin cryptography.
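To make that update mechanism concrete, here’s a minimal Python sketch of the idea: the site owner signs a digest of the new content, and peers verify the signature before re-seeding it. ZeroNet actually uses Bitcoin-style ECDSA keys; the HMAC ‘signature’ and the function names below are simplifications of mine, not ZeroNet’s API.

```python
import hashlib
import hmac

# Stand-in for ZeroNet's Bitcoin-style ECDSA signing: here a shared-key
# HMAC plays the role of the site owner's signature (a deliberate
# simplification -- real ZeroNet uses public-key cryptography).
OWNER_KEY = b"site-owner-secret"

def sign_update(content: bytes) -> str:
    """Publisher side: hash the new site content and 'sign' the digest."""
    digest = hashlib.sha512(content).hexdigest()
    return hmac.new(OWNER_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify_update(content: bytes, signature: str) -> bool:
    """Peer side: recompute the digest and check the signature
    before accepting and re-seeding the update."""
    digest = hashlib.sha512(content).hexdigest()
    expected = hmac.new(OWNER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

update = b'{"title": "My ZeroNet site"}'
sig = sign_update(update)
assert verify_update(update, sig)             # untampered update propagates
assert not verify_update(update + b"x", sig)  # tampered update is rejected
```

The point of the design is that peers can safely redistribute a site they don’t trust personally: only content matching the owner’s signature spreads through the swarm.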
The trouble, of course, comes when someone who isn’t yet running ZeroNet wants to visit a site. Thankfully, there’s a way around that using a ‘proxy’ or bridge. This is ZeroNet running on a public server for everyone to use. There are several of these, but I’ve set up my own using this guide.
I encourage you to download and experiment with ZeroNet but, even if you don’t, please check out my new blog. You can access it via uncensored.dougbelshaw.com or bit.ly/doug-uncensored — the rather long and unwieldy actual IP address of the server running the public-facing copy is 18.104.22.168/1PsNi4TAkn6vtKA6n1Se9y7gmVjF4GU3uF.
Finally, if you’re thinking, “What is this?! It’ll never catch on…” then I’d like to remind you about technologies that people didn’t ‘get’ at first (e.g. Twitter in 2007) as well as that famous Wayne Gretzky quotation, “I skate to where the puck is going to be, not where it has been”.
As I write this, I’m in an apartment in Barcelona, after speaking and running a workshop at an event.
On Sunday, there was a vote for Catalonian independence. It went ahead due to the determination of teachers (who kept schools open as voting centres), the bravery of firemen and Catalan police (who resisted Spanish police), and… technology.
As I mentioned in the first section of my presentation on Wednesday, I’m no expert on Spanish politics, but I am very interested in the Catalonian referendum from a technological point of view. Not only did the Spanish government take a heavy-handed approach by sending in masked police to remove ballot boxes, but they applied this to the digital domain, raiding internet service providers, blocking websites, and seizing control of referendum-related websites.
Yet, people still accessed websites that helped them vote. In fact, around 42% managed to do so, despite all of the problems and potential danger involved.
By way of contrast, no more than 43% of the population has ever voted in a US Presidential election (see comments section). There have been claims of voting irregularities (which can be expected when Spanish police were using batons and rubber bullets), but of those who voted, 90% voted in favour of independence.
People managed to find out the information they required through word of mouth and via websites that were censorship-resistant. The technologists responsible for keeping the websites up despite interference from Madrid used IPFS, which stands for InterPlanetary File System. IPFS is a decentralised system which manages to remove the reliance on a single point of failure (or censorship) while simultaneously solving problems around inefficiencies caused by unnecessary file duplication.
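The deduplication falls straight out of content addressing: a file’s address is derived from its bytes, so identical files resolve to the same address and only need to be stored once. A toy Python sketch of the principle (real IPFS CIDs add multihash and chunking machinery that I’m glossing over here):

```python
import hashlib

def content_address(data: bytes) -> str:
    """A simplified content identifier: the hash of the data itself.
    (Real IPFS CIDs wrap this in multihash/multibase framing.)"""
    return hashlib.sha256(data).hexdigest()

# A toy content-addressed store: the key IS derived from the value.
store: dict[str, bytes] = {}

def add(data: bytes) -> str:
    cid = content_address(data)
    store.setdefault(cid, data)  # identical content is stored only once
    return cid

a = add(b"referendum polling station list")
b = add(b"referendum polling station list")  # someone uploads a duplicate
assert a == b          # same bytes, same address -- no matter who adds them
assert len(store) == 1  # deduplicated: one copy serves everyone
```

This is also what makes the system censorship-resistant: you ask the network for a hash, not a hostname, so there’s no single server to raid or DNS entry to block.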
The problem with IPFS, despite its success in this situation, is that it’s mainly used via the command line. As much as I’d like everyone to have some skills around using terminal windows, realistically that isn’t likely to happen anytime soon in a world of Instagram and Candy Crush.
Instead, I’ve been spending time investigating ZeroNet, which is specifically positioned as providing “open, free and uncensorable websites, using Bitcoin cryptography and BitTorrent network”. Instead of there being ‘gateways’ through which you can access ZeroNet sites through the open web, you have to install it and then run it locally in a web browser. It’s a lot easier than it sounds, and the cross-platform app has an extremely good-looking user interface.
I’ve created a ‘Doug, uncensored’ blog using ZeroNet. This can be accessed by anyone who is running the service and knows the (long) address. When you access the site you’re accessing it on your own machine and then serving it up to others, just like with BitTorrent. It’s the realisation of the People’s Cloud idea that Vinay Gupta came up with back in 2013. The great thing about that is the websites work even when you’re offline, and sync when you re-connect.
As with constant exhortations for people to be more careful about their privacy and security, decentralised technologies might seem ‘unnecessary’ to most people when everything is going fine. However, just as we put curtains on our windows and locks on our doors, and sign contracts ‘just in case’ something goes wrong, so I think decentralised technologies should be our default.
Why do we accept increased centralisation and surveillance as the price of being part of the digital society? Why don’t we take back control?
Again, as I mentioned in my presentation on Wednesday, we look backwards too much when we’re talking about digital skills, competencies, and literacies. Instead, let’s look forward and ensure that the next generation of technologies don’t sell us down the river for advertising dollars.
Image CC BY-NC-ND Adolfo Luhan
Back in November 2007, Martin Weller, a Professor at the Open University wrote that, in his opinion, the VLE/LMS is dead – “but we’ll probably take five years to realise it”. It’s been almost a decade since his post, and there has been plenty more written about the LMS. In fact, Google returns almost 20,000 results for the search term “LMS is dead”, and just recently Jim Groom wrote a widely-shared and commented-upon post about it.
Yet, it seems, the truth is that the LMS is not going away anytime soon. Why is that? Why have the alternative solutions mentioned in Martin’s post withered and died while the LMS lives on? Why would anyone in 2017 use an LMS? Curiously, the answers are right there in the post from 10 years ago.
Meanwhile, the reasons Martin gives in that post for moving away from an LMS have largely been negated by developments over the last ten years. Here’s his original list of the benefits of using a ‘small pieces, loosely joined’ approach instead of an LMS:
- Better quality tools
- Modern look and feel
- Appropriate tools
- Avoids software sedimentation
- Disintermediation happens
Back when he wrote this post, I would have agreed with all of Martin’s points, envisioning a future filled with users merrily skipping between platforms into the sunset. I’ve learned a lot since then, and it’s pretty clear that a ‘small pieces, loosely joined’ approach is unlikely to ever happen. The LMS market is growing, not shrinking.
I’ve been thinking about all this because I’ve just started doing some work with Totara, an organisation I first came across back in 2012 when they built the Open Badges functionality for Moodle. Since then, while their code remains open source, they’ve ‘forked’ from the Moodle codebase. They’ve also got Totara Social, an ‘enterprise social network’ platform.
Interestingly, Totara are in the process of removing ‘LMS’ from their branding. That doesn’t mean that the concept of the learning management system is dead. No. What’s happening here is that the term ‘LMS’ has become a ‘dead metaphor’. It no longer does any useful work.
To quote myself, elsewhere:
The problem is that people will, either purposely or naïvely, use human-invented terms in ‘incorrect’ ways. This can lead to exciting new avenues, but it also spells the eventual death of the original term as it loses all explanatory power. A dead metaphor, as Richard Rorty says, is only good as the ‘coral reef’ on which to build other terms.
A learning management system, in essence, is a digital space to support learning. It doesn’t particularly matter what you call it so long as it:
- Has the functionality you require
- Costs what you can afford
- Is reliable
The reason I’ve accepted this piece of work with Totara is that they tick all of my boxes around their approach to this space. They’re innovative. They’re open source. They’ve got a sustainable business model. I’m looking forward to helping them develop a workable vision and strategy around their community that fits with their pretty unique partner network approach.
As regular readers will be aware, and as betrayed by the introduction to this post, my background is in formal and informal learning. The Learning & Development (L&D) space is relatively new to me, so if you’ve got tips on people to follow, places to hang out, and things to read, please do let me know!
TL;DR: I’ve ditched LastPass in favour of LessPass. The former stores your passwords in the cloud and requires a master password. The latter uses ‘deterministic password generation’ to keep things on your own devices.
Although I’ve used LastPass for the past six years, I’ve never been completely happy with it. There have been breaches, and a couple of years ago it was acquired by LogMeIn, a company not exactly revered in terms of trust and customer service. Their ‘emergency break-in’ feature makes me feel that my passwords are just one serious hack or government request away.
I read Hacker News on pretty much a daily basis and I’m particularly interested in the underlying approaches to technology that change over time. There are certain assumptions and habits of mind that come to be questioned which lead to different, usually better, solutions to certain problems. Today, the issue of cloud-based password managers was again on the front page.
From the linked article:
When passwords are stored, they must be encrypted and then retrieved later when needed. Storage, of any type, is a burden. Users are required to backup stored passwords and synchronize them across devices and implement measures to protect the stored passwords or at least log access to the stored passwords for audit purposes. Unless backups occur regularly, if the encrypted password file becomes corrupt or is deleted, then all the passwords are lost.
Users must also devise a “master password” to retrieve the encrypted passwords stored by the password management software. This “master password” is a weak point. If the “master password” is exposed, or there is a slight possibility of potential exposure, confidence in the passwords are lost.
I believe that password management should only occur locally on end use devices, not on remote systems and not in the client web browser.
Remote systems are outside the user’s control and thus cannot be trusted with password management. These systems may not be available when needed and may not be storing or transmitting passwords correctly. Externally, the systems may seem correct (https, etc.) but behind the scenes, no one really knows what’s going on, how the passwords are being transmitted, generated, stored, or who has access to them.
It’s pretty difficult to argue against these two points. Having felt uneasy for a while, I knew it was time to do something different. It was time to ditch LastPass.
I looked at a couple of different solutions: the one proposed by the author of the above quotations (too complex to set up), as well as one which looked promising, but now seems to be unsupported. In the end, I decided upon LessPass, which has been recommended to me by a few people this year.
How is LessPass different from LastPass? This gif from their explanatory blog post is helpful:
All of this happens in the browser, without your data being transmitted anywhere else.
Basically, you enter the following:
- Name of the site or thing for which you need a password
- Your username
- A secret passphrase
…and, from these three pieces of information, LessPass uses complex algorithms and entropy stuff that I don’t understand to generate a password that you can then copy.
The fact that I don’t understand it is fine, because there are people who do, and the code is Open Source. It can be inspected for bugs and vulnerabilities — unlike the proprietary solution provided by LastPass.
The options button to the bottom-right of the LessPass window gives the user advanced options such as:
- Length of password
- Types of character to include in the password
- Increment number (if you’re forced to rotate passwords regularly)
My favourite LessPass feature, though, solves a nagging problem I’ve had for ages. If you have a long passphrase, then sometimes it can be very easy to mistype it. You don’t want to reveal your obfuscated passphrase to the world, so how can you be sure that you’ve typed it correctly?
Simple! LessPass adds an emoji triplet to the right of the secret passphrase box. You’ll notice that changes as you type and, when you finish, it should always look the same. If it doesn’t, then you’ve mistyped your passphrase.
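My guess at how such a fingerprint works under the hood: hash the passphrase and map a few bytes of the digest onto an emoji set, so a typo almost certainly produces a different triplet without revealing anything about the passphrase itself. A toy sketch of that idea (the emoji set and function are hypothetical, not LessPass’s actual code):

```python
import hashlib

# A tiny sample emoji alphabet; a real implementation would use a
# larger set to make accidental collisions even less likely.
EMOJI = ["🐶", "🐱", "🦊", "🐼", "🦁", "🐸", "🐙", "🦉"]

def fingerprint(passphrase: str) -> str:
    """Map a passphrase to a three-emoji fingerprint. The hash is
    one-way, so the triplet confirms what you typed without
    exposing the passphrase."""
    digest = hashlib.sha256(passphrase.encode()).digest()
    return "".join(EMOJI[b % len(EMOJI)] for b in digest[:3])

# Deterministic: the same passphrase always shows the same triplet,
# while a mistyped one will (with high probability) look different.
assert fingerprint("correct horse") == fingerprint("correct horse")
```

It’s a neat bit of UX: a human-checkable checksum that leaks essentially nothing.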
I’ll be making the transition from LastPass to LessPass over the next few weeks. It’s not as simple as just exporting from one database into another, as the whole point of doing this is that there is no one place that someone can hoover up my passwords.
So my plan of action is:
- Every time I use a service, create a new password using LessPass.
- Delete existing password in LastPass.
- Rinse and repeat until most of my passwords are generated via LessPass.
- Delete my LastPass account.
- Celebrate my higher levels of personal security.
Questions? Ask away in the comments section!
Facebook, on the other hand, only offers its users a forum to connect and share information. Facebook’s income derives from selling targeted advertising to be delivered to those same users, based on preferences the site has learned from their comments, friends, and preferences. It has no goods or services to sell, and its users don’t buy anything. Thus, its only product to take to market is, in fact, its users’ data. (source)
I don’t use Facebook. You shouldn’t either.
I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. (Jonathan Albright, source)
Personalised advertising isn’t useful. It’s invasive, and it’s used to build a profile to manipulate you and your ‘friends’.
Using personality targeting, Facebook posts can attract up to 63 percent more clicks and 1,400 more conversions. (source)
There are several pretty scary implications of where this could take us by 2020:
- Public sentiment as high-frequency trading — algorithms compete to sway the opinions of the electorate / consumers.
- Personalized, automated propaganda — not just lies by politicians, but auto-generated lies created by bots who know which of your ‘buttons’ to press.
- Ideological filter matrices — what happens when all of the other ‘people’ in your Facebook group are actually bots?
So, not only will I not use Facebook, but (like Dave Winer and John Gruber) I won’t link to it. Nor will I accept organisations that I’m part of setting up Facebook groups ‘for convenience’ or using a Facebook page in lieu of a website.
Facebook is designed from the ground up as an all-out attack on the open web. (John Gruber)
The web is a huge force for good. We shouldn’t let inertia and a lack of digital skills turn it into a series of walled data mines.
Get a blog. If your ideas have any value put them on the open web. (Dave Winer)
From a business point of view, you’re mad to put all of your eggs in one basket. Get a website. Facebook’s content is, by design, not indexed by search engines, so it’s invisible to them.
Look, I get that I’m the nut who doesn’t want to use Facebook. I’m not even saying don’t post your stuff to Facebook. But if Facebook is the only place you are posting something, know that you are shutting out people like me for no good reason. Go ahead and post to Facebook, but post it somewhere else, too. Especially if you’re running a business.
It’s 2017. There are a million ways to get a web site set up inexpensively that you can easily update yourself. Setting up a Facebook page and letting your web site rot, or worse, not even having a web site of your own, is outsourcing your entire online presence. That’s truly insane. It’s a massive risk to your business, and frankly, stupid. (source)
I feel more strongly about Facebook’s threat to the web than I did about Microsoft’s Internet Explorer at the turn of the millennium. Scarily, it looks like Twitter might be going the same way. I blame venture capital and invasive advertising.
Header image based on an original by rodrigo