Open Thinkering


Tag: privacy

Kettled by Big Tech?

Yesterday on Mastodon, I shared with dismay Facebook’s decision to impose ‘login via Facebook account’ on the Oculus range of products. If, like me, you have an Oculus VR headset, but don’t want a Facebook account, then your device is going to become pretty useless to you.

The subsequent discussion included a request not to share links to the Oculus blog due to the number of Facebook trackers on the page. Others replied talking about the need to visit such sites using Firefox multi-account containers, as well as ensuring you have adblockers and other privacy extensions installed. One person likened it to needing an “internet condom” because “it’s a red light district out there”.

I struggle to explain the need for privacy and my anti-Facebook stance to those who just can’t see the associated problems. Sexualised metaphors such as the above are illustrative but not helpful in this regard.

Perhaps a police tactic to contain and disperse protesters might serve as a better analogy?

Kettling (also known as containment or corralling) is a police tactic for controlling large crowds during demonstrations or protests. It involves the formation of large cordons of police officers who then move to contain a crowd within a limited area. Protesters either leave through an exit controlled by the police or are contained, prevented from leaving, and arrested.

Wikipedia

The analogy might seem a little strained. Who are the protesters? Do the police represent Big Tech? What’s a ‘demonstration’ in this context?

However, let’s go one step further…

[K]ettling is sometimes described as “corralling,” likening the tactic to the enclosure of livestock. Although large groups are difficult to control, this can be done by concentrations of police. The tactic prevents the large group breaking into smaller splinters that have to be individually chased down, thus requiring the policing to break into multiple groups. Once the kettle has been formed, the cordon is tightened, which may include the use of baton charges to restrict the territory occupied by the protesters.

Wikipedia

In this situation, the analogy is perhaps a little easier to see. Protesters, who in this case would be privacy advocates and anti-surveillance campaigners, are ‘kettled’ by monopolistic practices that effectively force them to get with the program.

Whether it’s Facebook buying Oculus and forcing their data collection practices on users, or websites ‘breaking’ when privacy extensions are active, it all gets a bit tiring.

Which brings us back to kettling. The whole point of this tactic is to wear down protesters:

Peter Waddington, a sociologist and former police officer who helped develop the theory behind kettling, wrote: “I remain firmly of the view that containment succeeds in restoring order by using boredom as its principle weapon, rather than fear as people flee from on-rushing police wielding batons.”

Wikipedia

It’s a difficult fight to win, but an important one. We do so through continuing to protest, but also through encouraging one another, communicating, and pushing for changes in laws around monopolies and surveillance.


This post is Day 35 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com

The auto-suggested life is not worth living

If you use Google products such as Android, Google Docs, or Gmail, you may have noticed more suggestions recently.

Some of these can be helpful, for example when replying to questions posed via messaging services. There are definitely times when I’m in a hurry and just need to say ‘Okay’ or give a thumbs-up to my wife.

On the other hand, suggestions made while I’m composing an email or writing in a Google Doc are a bit different. I find this as annoying as someone else trying to finish my sentences during a conversation. That’s not what I was going to say.

In a recent article for The Art of Manliness, Brett and Kate McKay point out the potential toll of these nudges:

Some of society’s options for living represent time-tested traditions — distillations of centuries of experiments in the art of human flourishing. Many of our mores, however, owe their existence to expediency, conformity, laziness. Practices born from once salient but no longer relevant circumstances are continued from sheer inertia, from that flimsiest of rationalizations: “That’s the way it’s always been done.”

Brett and Kate McKay

The suggestions in Google’s products come from machine learning, which is, by definition, looking to the past to predict the future. One way to think about this is as a subtle pressure to conform.

Back in December last year, I was in NYC presenting on surveillance capitalism for a talk entitled Truth, Lies, and Digital Fluency. Riffing on Shoshana Zuboff’s book, I explained that surveillance capitalists want to be able to predict your next move and sell this to advertisers, insurers, and the like.

It’s an approach rooted in behaviourism, the idea that a particular stimulus always leads to a particular response. The closer they can get to that, the more money they can make. It’s true what Aral Balkan has been pointing out for years: we’re being farmed by surveillance capitalists.

Who wants to live this kind of life? But it’s not just the explicit auto-suggestions that we need to be wary of. Social networks like Facebook and Twitter feed off, and monetise through advertising, the emotions we feel about certain subjects. They are rage machines.

Stimulus: response. Let’s not lose our ability to think, to reason, and (above all) to be rational.


This post is Day 33 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com

Herd immunity for privacy

Self-hosting is the holy grail for privacy advocates. And I don’t mean having a VPS hosted for you somewhere; I mean having your server physically located on your own premises.

Messaging, including email, is particularly important when it comes to privacy. Now, there are three reasons I choose not to run my own email server:

  1. I have no desire to be a sysadmin, and these things can be fiddly to set up and subject to downtime.
  2. Due to the preponderance of spam, the big players have developed procedures and policies making it difficult for self-hosters to get their emails delivered.
  3. If my focus is privacy, well, almost everyone else I contact uses Google, Microsoft, or Apple, meaning Big Tech will get my data anyway.

The third point is an important one to dwell upon, and is the reason why I continue to argue for privacy even in the midst of a pandemic. I can take all the defensive actions I like, but if my family and friends don’t change their practices, then I’m going to get diminishing returns.

In addition to the email example above, consider the following scenarios:

  • Images — you have to be part of a social network to stop people being able to tag you, which is a bit of a dilemma when someone tags me in a photograph on Facebook or Instagram (where I don’t have an account).
  • Location — when I travel, I’m often with family or friends so if they’re sharing their location, my location is also being shared.
  • Tracking — when using shared computers, it’s not difficult for Big Tech to associate accounts coming from the same residential IP address and make inferences.

This all might sound a bit tinfoil hat, but privacy is the reason we have curtains on our windows and why we don’t tell everyone what we’re doing all of the time.

I realise that we can’t turn the clock back, and goodness knows privacy advocates have made some missteps along the way. But now we live in a world where both governments and Big Tech have a vested interest in the general public lacking what I’d call ‘herd immunity for privacy’.

So although it seems like a somewhat futile task at times, I’ll continue to pragmatically protect my own privacy and encourage those around me to do likewise.


This post is Day 26 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com

We’re the real losers of realtime behavioural advertising auctions

Like many people in my immediate networks, I think behavioural advertising is rotting the web. It’s the reason that I have four different privacy-focused extensions in my web browser and use a privacy-focused web browser on my smartphone.

As a result, when I start looking for some new running shoes, as I have this week, the pairs I considered buying yesterday don’t ‘follow me around the web’ today, popping up on other sites and tempting me to buy them.

The political implications of this behavioural advertising are increasingly well known after the surprise results of the US Presidential election and Brexit a few years ago. Advertisers participate in real-time auctions for access to particular demographics.

But what’s less well known, and just as important, is what happens to the losers of the real-time auctions when you visit a site.

Say you visit the Washington Post. Dozens of brokers bid on the chance to advertise to you. All but one of them loses the auction. But every one of those losers gets to add a tag to its dossier on you: “Washington Post reader.”

Advertising on the Washington Post is expensive. “Washington Post reader” is a valuable category unto itself: a lot of blue-chip firms will draw up marketing plans that say, “Make sure we tell Washington Post readers about this product!”

Here’s the thing: the companies want to advertise to Washington Post readers, but they don’t care about advertising in the Washington Post. And now there are dozens of auction “losers” who can sell the right to advertise to you, as a Post reader, when you visit cheaper sites.

When you click through one of those dreadful “Here’s 22 reasons to put a rubber band on your hotel room’s door handle” websites, every one of those 22 pages can be sold to advertisers who want to reach Post readers, at a fraction of what the Post charges.

Cory Doctorow, Pluralistic

I kind of knew this, but it’s useful to have it explained in such a succinct way by Doctorow.
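
To make the mechanics concrete, here’s a toy simulation, entirely my own sketch: the broker names, bid amounts, and the ‘Washington Post reader’ label are made up for illustration. It models an auction in which every losing bidder still gets to file the audience segment away in its dossier:

    # Toy model of a real-time bidding auction. Broker names, bids, and the
    # audience segment label are illustrative only.
    import random

    def run_auction(segment: str, brokers: dict) -> str:
        """Auction one ad impression and return the winning broker's name."""
        bids = {name: random.uniform(0.5, 5.0) for name in brokers}
        winner = max(bids, key=bids.get)
        # Every participant, winner or loser, saw the bid request, so each
        # can add the audience segment to its dossier on the user.
        for record in brokers.values():
            record["dossier"].add(segment)
        return winner

    brokers = {f"broker-{i}": {"dossier": set()} for i in range(30)}
    winner = run_auction("Washington Post reader", brokers)
    print(f"{winner} shows the ad; the other {len(brokers) - 1} brokers")
    print("still tagged the user, e.g.", brokers["broker-0"]["dossier"])

The one ad that actually gets shown is almost beside the point; the lasting asset is the tag that all thirty participants now hold.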

So if you’re not currently performing self-defence against behavioural advertising, here’s what I use in Firefox on my desktop and laptop:

  • Privacy Badger
  • Disconnect
  • uBlock Origin

These overlap one another to a great extent, but good things happen when I use all three in tandem. On mobile, I rely on Firefox Focus and Blokada.

You might also be interested in a microcast I recorded back in January for Thought Shrapnel on the Firefox extensions I use on a daily basis.


This post is Day 25 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com

Practice what you preach

I spend a lot of time looking at screens and interacting with other people in a mediated way through digital technologies. That’s why it’s important to continually review the means by which I communicate with others, either synchronously (e.g. through a chat app or video conference software) or asynchronously (e.g. via email or this blog).

When I started following a bunch of people who are using the #100DaysToOffload hashtag, some of them followed me back:



@dajbelshaw you have a really beautiful site that doesn't open for me. First it's not compatible with LibreJs and then uMatrix block Cloudflare's ajax and you'll not get further than loading screen.

I know that some people are quite hardcore about not loading JavaScript for privacy reasons, but I didn’t know what ‘LibreJs’ was. Although uMatrix rang a bell, I thought it would be a good opportunity to find out more.


It turns out LibreJS is a browser extension maintained by the GNU project:

GNU LibreJS aims to address the JavaScript problem described in Richard Stallman’s article The JavaScript Trap. LibreJS is a free add-on for GNU IceCat and other Mozilla-based browsers. It blocks nonfree nontrivial JavaScript while allowing JavaScript that is free and/or trivial.

Meanwhile, uMatrix seems to be another browser extension that adds a kind of ‘firewall’ to page loading:

Point & click to forbid/allow any class of requests made by your browser. Use it to block scripts, iframes, ads, facebook, etc.

The extensions that I use when browsing the web to maintain some semblance of privacy, and to block annoying advertising, are:

  • Privacy Badger
  • Disconnect
  • uBlock Origin

So just running the tools I use on my own site leads to the following:

Privacy Badger found 18 potential trackers on dougbelshaw.com:

web.archive.org
ajax.cloudflare.com
assets.digitalclimatestrike.net
www.google-analytics.com
docs.google.com
play.google.com
lh3.googleusercontent.com
lh4.googleusercontent.com
lh5.googleusercontent.com
lh6.googleusercontent.com
licensebuttons.net
www.loom.com
public-api.wordpress.com
pixel.wp.com
s0.wp.com
s1.wp.com
stats.wp.com
widgets.wp.com

Disconnect produced a graph which shows the scale of the problem:

Graph produced by Disconnect showing trackers for dougbelshaw.com

This was the output from uBlock Origin:

Output from uBlock Origin for dougbelshaw.com

It’s entirely possible to make a blog that involves no JavaScript or trackers. It’s just that, to also make it look nice, you have to do some additional work.

I’m going to start the process of removing as many of these trackers as I can from my blog. It really is insidious how additional functionality and ease-of-use for blog owners adds to the tracking burden for those reading their output.
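
If you want a rough way of keeping score without installing anything, the sketch below is the kind of thing I have in mind. It’s an illustrative script of my own (not one of the extensions above): it fetches a page and lists the third-party hosts referenced in its HTML. It won’t catch trackers injected later by JavaScript, so treat it as a lower bound.

    # A quick-and-dirty audit: fetch a page and list the third-party hosts
    # referenced by src/href attributes in its HTML. Illustrative only; it
    # misses anything injected later by JavaScript.
    import re
    import urllib.request
    from urllib.parse import urlparse

    def third_party_hosts(url: str) -> set:
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        own_host = urlparse(url).hostname
        hosts = set()
        for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)["\']', html):
            host = urlparse(match.group(1)).hostname
            if host and host != own_host:
                hosts.add(host)
        return hosts

    for host in sorted(third_party_hosts("https://dougbelshaw.com")):
        print(host)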

Recently, I embedded a Google Slides deck in a weeknote I wrote. I’m genuinely shocked at how many trackers just including that embed added to my blog: 84! Suffice to say that I’ve replaced it with an archive.org embed.

I was surprised to see that Privacy Badger was reporting tracking by Facebook and Pinterest. I’m particularly hostile to Facebook services, and don’t use any of them (including WhatsApp and Instagram). Upon further investigation, it turns out that even if you have ‘share to X’ buttons turned off, Jetpack still allows social networks to phone home. So that’s gone, too.


There’s still work to be done here, including a new theme that doesn’t include Google Fonts. I’m also a bit baffled by what’s using Google Analytics, and I’ll need to stop using Cloudflare as a CDN.

But, as ever, it’s a work in progress and, as Antoine de Saint-Exupéry famously said, “Perfection is achieved when there is nothing left to take away.”


This post is day two of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com


Header image by Gordon Johnson

More on the mechanics of GDPR

Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.


‘Enjoyment’ and ‘compliance-focused courses’ are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from Futurelearn’s course on Understanding the General Data Protection Regulation. This post summarises some of my learning and builds upon my previous post.

This week, the focus was on the rights of data subjects, starting with a discussion about the ‘modalities’ by which communication between the data controller or processor and the data subject takes place:

By modalities, we mean different mechanisms that are used to facilitate the exercise of data subjects’ rights under the GDPR, such as those relating to different forms of information provision (in writing, spoken, electronically) and other actions to be taken when data subjects invoke their rights.

Although the videos could be improved (I just use the transcripts), the mix of real-world examples, quizzes, and reflection is great and suits the way I learn best.

I discovered that the GDPR not only makes provision for what should be communicated by data controllers, but also for how this should be done:

In the first place, measures must be taken by data controllers to provide any information or any communication relating to the processing to these individuals in a concise, transparent, intelligible and easily accessible form, using the language that is clear and plain. For instance, it should be done when personal data are collected from data subjects or when the latter exercise their rights, such as the right of access. This requirement of transparent information and communication is especially important when children are data subjects.

Moreover, unless the data subject is somehow attempting to abuse the GDPR’s provisions, the data controller must provide the requested information free of charge.

The number of times my surname is spelled incorrectly (often ‘Bellshaw’), or companies have other details wrong, is astounding. It’s good to know, therefore, that the GDPR focuses on rectification of individuals’ personal data:

In addition, the GDPR contains another essential right that cannot be disregarded. This is the right to rectification. If controllers store personal data of individuals, the latter are further entitled to the right to rectify, without any undue delay, inaccurate information concerning them. Considering the purpose of the processing, any data subject has the right to have his or her personal data completed such as, for instance, by providing a supplementary statement.

So far, I’ve focused on myself as a user of technologies — and, indeed, the course uses Google’s services as an example. However, as lead for Project MoodleNet, I’m doing this course as a representative of Moodle, an organisation that would be both data controller and data processor.

There are specific things that must be built into any system that collects personal data:

At the time of the first communication with data subjects, the existence of the right to object– as addressed earlier– must be indicated to data subjects in a clear manner and separately from other information. This right can be exercised by data subjects when we deal with the use of information society services by automated means using technical specifications. Importantly, the right to object also exists when individuals’ personal data are processed for scientific or historical research or statistical purposes. This is, however, not the case if the processing is carried out for reasons of public interest.

Project MoodleNet will be a valuable service, but not from a scientific, historical, or statistical point of view. Nor will the data processing be carried out for reasons of public interest. As such, the ‘right to object’ should be set out clearly when users sign up for the service.

In addition, users need to be able to move their data out of the service and erase what was previously there:

The right to erasure is sometimes known as the right to be forgotten, though this denomination is not entirely correct. Data subjects have the right to obtain from data controllers the erasure of personal data concerning them without undue delay.

I’m not entirely clear what ‘undue delay’ means in practice, but when building systems we should design them with these things in mind. Being able to add, modify, and delete information is a key part of a social network. I wonder what happens when a blockchain is involved, given that it’s immutable?
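
One pattern that gets discussed for squaring erasure with immutable or append-only storage is ‘crypto-shredding’: encrypt each subject’s data with a per-subject key, and honour an erasure request by destroying the key rather than the record. The sketch below is purely illustrative (it isn’t from the course, and the names are made up), using the Python cryptography library:

    # Sketch of crypto-shredding: personal data is stored encrypted with a
    # per-subject key; deleting the key renders the record unreadable even if
    # the record itself lives in storage that can never be erased.
    from cryptography.fernet import Fernet

    keys: dict = {}             # per-subject keys (deletable)
    immutable_store: dict = {}  # pretend nothing here can ever be removed

    def store(subject_id: str, personal_data: str) -> None:
        if subject_id not in keys:
            keys[subject_id] = Fernet.generate_key()
        token = Fernet(keys[subject_id]).encrypt(personal_data.encode())
        immutable_store[subject_id] = token

    def erase(subject_id: str) -> None:
        """Honour an erasure request by destroying only the key."""
        keys.pop(subject_id, None)

    store("subject-42", "Doug Belshaw, doug@example.com")
    erase("subject-42")
    # The ciphertext is still in immutable_store, but without the key there
    # is no practical way to read the personal data back.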

The thing that concerns most organisations when it comes to GDPR is Article 79, which states that data subjects have legal recourse if they’re not happy with the response they receive:

Furthermore, we should mention the right to an effective judicial remedy against a controller or processor laid down in Article 79. It allows data subjects to initiate proceedings against data controllers or processors before a court of the Member State of the establishment of controllers or processors or in the Member State where they have their habitual residence unless controllers or processors are public authorities of the Member States and exercise their public powers. Thus, data subjects can directly complain before a judicial institution against controllers and processors, such as Google or others.

I’m particularly interested in what effect data subjects having the right “not to be subjected to automated individual decision-making” will have. I can’t help but think that (as Google has already started to do through granular opt-in questions) organisations will find ways to make users feel like it’s in their best interests. They already do that with ‘personalised advertising’.

There’s a certain amount of automation that can be useful, the standard example being Amazon’s recommendations system. However, I think the GDPR focuses more on things like decisions about whether or not to give you insurance based on your social media profile:

There are three additional rights of data subjects laid down in the General Data Protection Regulation, and we will cover them here. These rights are – the right not to be subjected to automated individual decision-making, the right to be represented by organisations and others, and the right to compensation. Given that we live in a technologically advanced society, many decisions can be taken by the systems in an automatic manner. The GDPR grants to all of us a right not to be subjected to a decision that is based only on an automated processing, which includes profiling. This decision must significantly affect an individual, for example, by creating certain legal effects.

Thankfully, when it comes to challenging organisations on the provisions of the GDPR, data subjects can delegate their representation to a non-profit organisation. This is a sensible step, and prevents lawyers becoming rich from GDPR challenges. Otherwise, I can imagine data sovereignty becoming the next personal injury industry.

If an individual feels that he or she can better give away his or her representation to somebody else, this individual has the right to contact a not-for-profit association– such as European Digital Rights – in order to be represented by it in filing complaints, exercising some of his or her rights, and receiving compensation. This might be useful if an action is to be taken against such a tech giant as Google or any other person or entity. Finally, persons who have suffered material or non-material damage as a result of an infringement of the GDPR have the right to receive compensation from the controller or processor in question.

Finally, given that the GDPR applies not only across European countries but also to any organisation that processes EU citizens’ data, the following is interesting:

The European Union and its Member States cannot simply impose restrictions addressed in Article 23 GDPR when they wish to. These restrictions must respect the essence of the fundamental rights and freedoms and be in line with the requirements of the EU Charter of Fundamental Rights and the European Convention for the Protection of Human Rights and Fundamental Freedoms. In addition, they are required to constitute necessary and proportionate measures in a democratic society meaning that there must be a pressing social need to adopt these legal instruments and that they must be proportionate to the pursued legitimate aim. Also, they must be aiming to safeguard certain important interests. So, laws adopted by the EU of its Members States that seek to restrict the scope of data subjects’ rights are required to be necessary and proportionate and must protect various interests discussed below.

I learned a lot this week which will stand me in good stead as we design Project MoodleNet. I’m looking forward to putting all this into practice!


Image by Erol Ahmed available under a CC0 license

Social networking and GDPR

Note: I’m writing this post on my personal blog as I’m still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements.


I have to admit to EU directive fatigue when it comes to technology (remember the ‘cookie law’?), so when I heard about the General Data Protection Regulation (GDPR), I didn’t give it the attention it deserved.

The GDPR is actually pretty awesome, and exactly the kind of thing we need in this technologically-mediated world. It has wide-ranging impact, even beyond Europe. In fact, it’s likely to set the standard for the processing of user information, privacy, and security from May 2018 onwards.

So, on the advice of Gavin Henrick, I’m in the midst of Futurelearn’s course on Understanding the General Data Protection Regulation. The content is great but, unlike Mary Cooch’s excellent videos for the Learn Moodle Basics 3.4 course (which I’m also doing at the moment), I don’t find the videos helpful. They don’t add anything, so it’s a more efficient use of my time to read the transcripts.

All of this is prologue to say that GDPR affects the work I’m leading at the moment with Project MoodleNet. It may be in its early stages, but privacy by design (PDF) means that we need to anticipate potential issues:

The Privacy by Design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy invasive events before they happen. PbD does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred − it aims to prevent them from occurring. In short, Privacy by Design comes before-the-fact, not after.

Project MoodleNet is a social network for educators focused on professional development and the sharing of open content. As such, it’s a prime example of where GDPR can protect and empower users.

Article 5(1) from the official document states:

Personal data shall be:

(a) processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’);

(b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’);

(c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);

(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’);

(e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) subject to implementation of the appropriate technical and organisational measures required by this Regulation in order to safeguard the rights and freedoms of the data subject (‘storage limitation’);

(f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).

We’re kicking off Project MoodleNet by looking at all of the different components we’ll be building, and really zeroing in on user control. A key part of that is the way(s) in which users can authenticate and are authorised to access different parts of the system. We’re exploring open source approaches such as gluu (which has been GDPR-ready since November) that make things easy for the user while protecting their privacy.

In addition, and as I’ve touched on while writing at the project blog, we’re going to need to ensure that users can, at the very least:

  • see what data is held on them
  • choose whether to revoke consent around storage and processing of that data
  • request a data export
  • ask for any of their personal data to be securely destroyed.
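
As a thought experiment, that list boils down to a handful of core operations. Here’s a minimal sketch of what they might look like: it’s my own, with hypothetical names throughout, and emphatically not Project MoodleNet code.

    # Hypothetical sketch of the minimum data-subject operations a service
    # might expose; not Project MoodleNet code.
    import json
    from dataclasses import dataclass, field

    @dataclass
    class SubjectRecord:
        personal_data: dict = field(default_factory=dict)
        consent: bool = True

    class DataSubjectRights:
        def __init__(self) -> None:
            self.records: dict = {}

        def view(self, subject_id: str) -> dict:
            """Right of access: show the subject what is held on them."""
            return self.records[subject_id].personal_data

        def revoke_consent(self, subject_id: str) -> None:
            """Withdraw consent to further storage and processing."""
            self.records[subject_id].consent = False

        def export(self, subject_id: str) -> str:
            """Data portability: return the data in a machine-readable format."""
            return json.dumps(self.records[subject_id].personal_data)

        def erase(self, subject_id: str) -> None:
            """Right to erasure: remove the record without undue delay."""
            del self.records[subject_id]

    service = DataSubjectRights()
    service.records["subject-42"] = SubjectRecord({"name": "Doug", "email": "doug@example.com"})
    print(service.export("subject-42"))

The hard part, of course, isn’t the code: it’s making sure every component of the system routes through operations like these rather than squirrelling data away elsewhere.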

I actually think Google do a pretty good job of most of this via Download your data in your account settings (formerly ‘Google Takeout’).

One challenge, I think, is going to be global search functionality. To make searching across people, resources, and news reasonably fast, there’s going to be some pre-caching involved. We need to explore to what extent that’s compatible with purpose limitation, data minimisation, and storage limitation. It may be that, as with the authentication/authorisation example above, this is already somewhat of a solved problem.
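
For example (a hypothetical sketch with made-up field names, not a design decision), one way to respect data minimisation when pre-caching a search index is to whitelist the public fields before anything is cached:

    # Hypothetical illustration of data minimisation for a pre-cached search
    # index: only an explicit whitelist of public fields ever gets indexed.
    PUBLIC_FIELDS = {"display_name", "bio", "shared_resources"}  # made-up names

    def index_document(profile: dict) -> dict:
        """Strip a profile down to whitelisted fields before indexing."""
        return {k: v for k, v in profile.items() if k in PUBLIC_FIELDS}

    profile = {
        "display_name": "Doug",
        "bio": "Educator",
        "email": "doug@example.com",       # never reaches the index
        "location_history": ["..."],       # never reaches the index
    }
    print(index_document(profile))  # {'display_name': 'Doug', 'bio': 'Educator'}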

A related issue is that different functionality may be used to a greater or lesser extent by users. Some (e.g. crowdfunding) may not be used by some educators at all. As such, perhaps through an approach that leans on microservices and APIs, we need to ensure the integrity and confidentiality of user data, while again adhering to the principle of data minimisation.

I’m delighted to be working on this project at such an exciting time for user control and privacy. Organisations that have been wilfully neglecting controls and safeguards around user data, or monetising it in unethical ways, are going to be in for a rough ride. Those, however, that have a commitment to openness and follow the principles of privacy by design are going to find that it’s a competitive advantage!


Image by Sanwal Deen available under a CC0 license

Why I just deleted all 77.5k tweets I’ve sent out over the last 10 years

Earlier this year, when Twitter changed their terms and conditions, I resolved to spend more time on Mastodon, the decentralised social network. In particular, I’ve been hanging out at social.coop, which I co-own with the other users of the instance.

Today, I deleted all 77.5k of my tweets using Cardigan, an open source tool named after the Swedish band The Cardigans (and their 90s hit ‘Erase/Rewind’):

Yes, I said it’s fine before
But I don’t think so no more
I said it’s fine before
I’ve changed my mind, I take it back

Erase and rewind
‘Cause I’ve been changing my mind

Why delete all my tweets? Because I’m sick of feeling like a slow-boiled frog. Twitter have updated their terms and conditions again, and now this service that used to be on the side of liberty is becoming a tool for the oppressor, the data miner, the quick-buck-making venture capitalist.

I’m out. I’ll continue posting links to my work, but that’s it. Consider it an alternative to my RSS feeds.

Deleting my tweets was a pretty simple process: I downloaded my Twitter archive and then uploaded it to Cardigan. This enabled me to delete all my tweets, not just the last 3,200.

The upside of doing this is that I could take my Twitter archive and upload it to a subdomain under my control, in this case twitter.dougbelshaw.com. All of my tweets are preserved in a really nicely-searchable way. Kudos to Twitter for making that so easy.

In addition, I realised that deleting my Twitter ‘likes’ (I’ll always call them ‘favourites’) was probably a good idea — all 31.4k of them. They’re not much use to me, but they can be data mined in some pretty scary ways, if Facebook is anything to go by.

I used Fav Cleaner (note: this service auto-tweets once on your behalf) to delete my Twitter likes/favourites. It’s limited to deleting 3,204 at a time, so I’ve left it running on a pinned tab and am returning to it periodically to set it off again. I may need to use something like Unfav.me as well.

To finally do this feels quite liberating. As a consultant, I often point out to clients when they’re exhibiting tendencies towards the sunk cost fallacy. In this case, I was showing signs myself! Just because using Twitter has been of (huge) value to me in the past doesn’t mean it will be in future, or in the same way.


Postscript: at the time of writing, Twitter’s still showing me as having tweeted a grand total of 67 tweets. However, it seems my timeline actually only features one tweet: something I retweeted back in 2016 — and can’t seem to un-retweet. I think it’s oddly fitting:

Ready to make the jump to Mastodon? I’m happy to answer your questions, and I would love to connect with you there. I can be found here: social.coop/@dajbelshaw.

Indie Tech Summit: On raising the next generation [VIDEO]

On U.S. Independence Day this year I was in Brighton (England) for the Indie Tech Summit. The focus was on discussing sustainable & ethical alternatives to corporate surveillance. Aral Balkan, the organiser, invited me to speak after we had a long discussion when I crashed the Thinking Digital closing party, and after I wrote this blog post.

All of the videos from the Summit are now up, and the Indie Tech team have done a great job with them. Here’s mine:

(not showing? click here or here)

The slides I used can be found on Slideshare and a full verbatim transcription of the talk is on this page.

I’d be interested in your reaction to what I have to say in this talk, especially if you’re involved in formal education in any way (educator, parent, etc.).

So here’s the problem…

Note: I’m kind of riffing off Everything Is Broken here. You should read that first.


I often think about leaving Twitter; about turning my annual Black Ops hiatus into something more… permanent.

The trouble is, I can’t.

I don’t mean in terms of “I don’t have it in me”, or “I’d prefer a better platform”. I mean that, if I did leave Twitter, I wouldn’t be able to fulfil my current role to the standard people have come to expect. In other words, there would be a professional cost to me not using a public, private space to communicate with others.

In fact, the same goes for Skype, Google+, and other proprietary tools: I could switch, but there are de facto standards at work here. If you don’t use what everyone else does, then you either (a) suffer a productivity hit, or (b) cause other people problems. Sometimes, it’s both.

By a ‘productivity hit’, I mean there’s a cognitive and cultural overhead of using tools outside the norm. I spoke to one person the other day – not a Mozilla employee – who said that their company’s commitment to security, privacy and Open Source software significantly hampers their productivity. In other words, they were trading some ease-of-use and productivity for data ownership, privacy and security.

By ’cause other people problems’ I mean that, particularly in the fast-moving world I inhabit, you don’t want to be slowed down by negotiations around which technology to use. Much as I’d love to migrate to WebRTC-powered apps such as appear.in, the truth is that Skype pretty much works every time. You can rely on almost everyone having it installed.*

It used to be easier to understand. Companies would sell their software which you would install on your computer. Most ‘free’ software was also ‘Open Source’ and available under a permissive license. Now, however, everything is free, and the difference between the following is confusing for the end user:

  • Free as in beer – you get this thing for free, but there’s a catch! (the company is mining and/or selling your personal data to advertisers/insurers)
  • Free as in speech – you get this thing for free, and you can inspect the code and use it for pretty much whatever you want.

As Vinay Gupta often puts it, a lot of the free apps and software we’re accessing these days are a form of legalised spyware. The only reason we don’t call it that is because the software providing the services and doing the spying resides on their servers. Our shorthand for this is ‘the cloud’.

The trouble is, and let’s be honest here, that apart from the big hitters like Ubuntu and Firefox, the free-as-in-beer software tends to have better UX than the free-as-in-speech software. It’s not enough to have stand-alone apps and software any more – customers demand that services talk to one another. And rightly so. The problem is that unless you’re burning through VC cash or selling user data to advertisers, it’s difficult to fund this kind of stuff. Someone or something has got to pay for the servers.

To conclude, I’m kind of done with thinking of this as an individual problem for me to solve in isolation. Yes, I could sit on an island by myself running BSD and only using super-secure and private apps/services. But I’d be a pariah. What we’ve got here is a cultural, not a technological, problem: it’s something for us all to fix:

It wouldn’t take a total defection or a general revolt to change everything, because corporations and governments would rather bend to demands than die. These entities do everything they can get away with — but we’ve forgotten that we’re the ones that are letting them get away with things.

The above quotation is from the article I suggested that you read at the top of this post. If you still haven’t done so yet, then read it when you finish this one.

Remember: there’s not loads we can do in isolation – especially given the mind-boggling complexity of the whole system. But we can talk with others about the situation in which we find ourselves. We can weave it into our conversations. We can join together in solidarity and, where there are opportunities, we can take informed action.

All of us need to up our game when it comes to the digital literacies and web literacy necessary to operate in this Brave New World. We shouldn’t be embarrassed about this in any way. After all, we’re collectively making it up as we go along.


*I think of Skype a bit like LinkedIn. No-one’s over the moon about using it, but until everyone migrates somewhere else, it’s what we’re stuck with.
