Open Thinkering


Tag: privacy

Digital Credentials: why context matters

A glass greenhouse reflecting a sunset, surrounded by dense greenery and tall trees.

This post was prompted by attending another excellent AMA session with the Digital Credentials Consortium (DCC) yesterday. The discussion raised a recurring issue in the world of digital credentials: unrealistic expectations. People often expect both too much and too little from these technologies, leading (I would argue) to demands that are simultaneously over-ambitious and underwhelming.

Overly ambitious expectations

Many people seem to want digital credentials, such as Verifiable Credentials (VCs), to be universally applicable, solving every conceivable use case across different contexts. This mindset is what led to the brief craze for blockchain-based credentials, where similar expectations were placed on distributed ledgers to ‘revolutionise’ various industries. A common desire is for these credentials to be “soul-bound,” linking them irrevocably to an individual’s biometric data. While this might sound like an ideal solution for ensuring security and authenticity, it introduces significant privacy concerns. Tying credentials to biometric details could lead to a scenario where every interaction or transaction becomes a potential privacy risk.

This expectation fails to account for the importance of context in the use of credentials. Even the most critical credentials, such as passports, are context-dependent. Passports are used in specific scenarios—usually, crossing borders—where the context dictates the level of scrutiny and the type of verification required. Translating this into the digital world without considering the context creates unnecessary complexity and risks. Most things are not as critical as passports, so using them as a benchmark doesn’t make much sense.

The importance of context

Credentials are most valuable when they are context-specific. For example, Kerri Lemoie, Director of the DCC, uses the example of proving that you’re of legal drinking age. In this case, it’s unnecessary to disclose your exact birthdate; all that matters is whether you meet the age requirement. This selective disclosure is important, particularly for marginalised communities, such as transgender individuals or refugees, who may need to prove their eligibility or identity without revealing other, potentially sensitive, information.

The challenge here lies in creating digital credentials that respect context while providing the necessary information. It’s not just about what the credential says, but how, when, and where it is used. By designing credentials that allow for selective disclosure, we can protect privacy while ensuring that the credential fulfils its intended purpose.
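To make the drinking-age example concrete, here is a minimal sketch of the idea (the function and field names are my own illustration, not the actual Verifiable Credentials API): the holder derives a claim that answers the verifier's question without exposing the underlying birthdate.

```python
from datetime import date

def derive_age_claim(birth_date: date, minimum_age: int, today: date) -> dict:
    """Disclose only whether the holder meets an age requirement,
    never the birthdate itself (illustrative sketch, not a real VC API)."""
    # Subtract one year if this year's birthday hasn't happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return {"ageOver": minimum_age, "satisfied": age >= minimum_age}

claim = derive_age_claim(date(2000, 5, 17), 18, date(2025, 1, 1))
# The verifier sees {'ageOver': 18, 'satisfied': True}, not the birthdate.
```

In a real deployment the derived claim would also carry a cryptographic proof linking it to the original credential; the point here is simply that the birthdate never leaves the holder's wallet.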

The evolving nature of digital credentials

Traditional, offline credentials are static: once issued, they don’t change. In contrast, digital credentials offer the potential for evolution. They can be updated, linked to other digital artefacts, and grow in value over time as they gain endorsements or additional evidence.

This dynamism, however, places an unfair burden on credential holders. Employers, for instance, often expect to sift through hundreds or thousands of applications, using credentials as a filtering tool. This approach treats credentials as mere checkboxes, ignoring their potential richness and depth. Instead, there should be a shift towards recognising ‘benchmark’ credentials that indicate a candidate has met the minimum requirements, with further scrutiny placed on unique qualifications or endorsements that set them apart.

Credentials have traditionally been seen as part of wider ‘eportfolios’, but it is less well understood that, these days, credentials can be mini-eportfolios in and of themselves.

The relational aspect of credentials

At their core, all credentials are relational. They represent an attestation from one party to another—a way of saying, “This person did this thing” or “We vouch for this individual.” This relational nature is fundamental to their function, yet it’s often overlooked when people talk about digital credentials.

The relational aspect becomes even more critical in the digital space. For instance, the concept of ‘trust registries,’ such as the one established by MIT, allows for the verification of credentials without needing to expose the underlying data. Tools like VerifierPlus enable this process, providing a way to check the validity of a credential quickly and efficiently. This is not unlike scanning a passport at a border crossing—simple, effective, and crucial for maintaining trust.
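A minimal sketch of the trust-registry idea (the registry contents and field names here are hypothetical, not MIT's actual registry format): the verifier checks a credential's issuer against a list of trusted issuers, and the registry never needs to see the credential's subject data at all.

```python
# Hypothetical trust registry: a set of issuer identifiers (e.g. DIDs)
# that a verifier is willing to accept credentials from.
TRUSTED_ISSUERS = {
    "did:web:registry.example.edu",
    "did:web:issuer.example.org",
}

def is_trusted(credential: dict, registry: set) -> bool:
    """Check only the issuer identifier; the claims inside the
    credential are never exposed to the registry."""
    return credential.get("issuer") in registry

credential = {"issuer": "did:web:issuer.example.org", "claims": {"degree": "BSc"}}
# is_trusted(credential, TRUSTED_ISSUERS) answers yes/no without
# the registry learning anything about the credential's holder.
```

Real registries add signature checks and revocation lists on top, but the shape of the interaction is this simple membership test.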

Privacy and security considerations

As digital credentials evolve, privacy concerns should remain at the forefront. It’s critical that credential holders have control over what information they share, and with whom. The current specification for Verifiable Credentials allows for this selective disclosure, but the user experience is still catching up. Some situations may require nothing more than proving control over a credential, while others may necessitate revealing a name or even providing biometric proof.

The key is flexibility. By designing systems that allow for a range of disclosures depending on the context, we can protect privacy while ensuring that credentials remain functional and trustworthy. This requires thoughtful UX design, as the technology must be intuitive and accessible to all users, not just those with technical expertise (as is the case currently).
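One way to picture this context-dependent flexibility (a sketch under my own assumptions; real wallets negotiate disclosure via presentation protocols rather than a static lookup table) is to map each verification context to the minimal set of fields it needs.

```python
# Hypothetical disclosure policies: each context names the minimal
# fields a verifier needs; everything else stays with the holder.
DISCLOSURE_POLICIES = {
    "age_check": ["ageOver"],
    "job_application": ["name", "qualification"],
    "border_crossing": ["name", "nationality"],
}

def present(credential: dict, context: str) -> dict:
    """Return only the fields the context's policy allows."""
    allowed = DISCLOSURE_POLICIES.get(context, [])
    return {field: credential[field] for field in allowed if field in credential}

full = {"name": "A. Holder", "ageOver": 18, "nationality": "GB", "qualification": "MA"}
# present(full, "age_check") reveals just {"ageOver": 18}.
```

Good UX would make choosing (or overriding) these policies effortless for the holder, which is exactly where the current tooling lags behind the specification.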

Conclusion

Digital credentials hold immense potential, but this potential is only realised when we manage expectations, maintain context, and prioritise privacy. By recognising the relational nature of credentials and shifting some of the burden away from credential holders, we can create a more balanced, effective system that respects both individual rights and societal needs.

As we move forward, it’s crucial to keep these considerations in mind, ensuring that digital credentials are not just technically sound, but also socially responsible and practical in their application.


Image: Matej Spulak

Kettled by Big Tech?

Yesterday on Mastodon, I shared with dismay Facebook’s decision to impose ‘login via Facebook account’ on the Oculus range of products. If, like me, you have an Oculus VR headset, but don’t want a Facebook account, then your device is going to become pretty useless to you.

The subsequent discussion included a request not to share links to the Oculus blog due to the number of Facebook trackers on the page. Others replied talking about the need to visit such sites using Firefox multi-account containers, as well as ensuring you have adblockers and other privacy extensions installed. One person likened it to needing an “internet condom” because “it’s a red light district out there”.

I struggle to explain the need for privacy and my anti-Facebook stance to those who can’t immediately see the associated problems. Sexualised metaphors such as the above are illustrative but not helpful in this regard.

Perhaps a police tactic to contain and disperse protesters might serve as a better analogy?

Kettling (also known as containment or corralling) is a police tactic for controlling large crowds during demonstrations or protests. It involves the formation of large cordons of police officers who then move to contain a crowd within a limited area. Protesters either leave through an exit controlled by the police or are contained, prevented from leaving, and arrested.

Wikipedia

The analogy might seem a little strained. Who are the protesters? Do the police represent Big Tech? What’s a ‘demonstration’ in this context?

However, let’s go one step further…

[K]ettling is sometimes described as “corralling,” likening the tactic to the enclosure of livestock. Although large groups are difficult to control, this can be done by concentrations of police. The tactic prevents the large group breaking into smaller splinters that have to be individually chased down, thus requiring the policing to break into multiple groups. Once the kettle has been formed, the cordon is tightened, which may include the use of baton charges to restrict the territory occupied by the protesters.

Wikipedia

In this situation, the analogy is perhaps a little easier to see. Protesters, who in this case would be privacy advocates and anti-surveillance protesters, are ‘kettled’ by monopolistic practices that effectively force them to get with the program.

Whether it’s Facebook buying Oculus and forcing their data collection practices on users, or websites ‘breaking’ when privacy extensions are active, it all gets a bit tiring.

Which brings us back to kettling. The whole point of this tactic is to wear down protesters:

Peter Waddington, a sociologist and former police officer who helped develop the theory behind kettling, wrote: “I remain firmly of the view that containment succeeds in restoring order by using boredom as its principal weapon, rather than fear as people flee from on-rushing police wielding batons.”

Wikipedia

It’s a difficult fight to win, but an important one. We do so by continuing to protest, but also by encouraging one another, communicating, and pushing for changes in laws around monopolies and surveillance.


This post is Day 35 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com

The auto-suggested life is not worth living

If you use Google products such as Android, Google Docs, or Gmail, you may have noticed more suggestions recently.

Suggestions made while I’m composing an email or writing in a Google Doc are a different matter, though. I find them as annoying as someone else trying to finish my sentences during a conversation. That’s not what I was going to say.

Some of these can be helpful, for example when replying to questions posed via messaging services. There are definitely times when I’m in a hurry and just need to say ‘Okay’ or give a thumbs-up to my wife.

In a recent article for The Art of Manliness, Brett and Kate McKay point out the potential toll of these nudges:

Some of society’s options for living represent time-tested traditions — distillations of centuries of experiments in the art of human flourishing. Many of our mores, however, owe their existence to expediency, conformity, laziness. Practices born from once salient but no longer relevant circumstances are continued from sheer inertia, from that flimsiest of rationalizations: “That’s the way it’s always been done.”

Brett and Kate McKay

The suggestions in Google’s products come from machine learning which is, by definition, looking to the past to predict the future. One way to think about this is as a subtle pressure to conform.

Back in December last year, I was in NYC presenting on surveillance capitalism for a talk entitled Truth, Lies, and Digital Fluency. Riffing on Shoshana Zuboff’s book, I explained that surveillance capitalists want to be able to predict your next move and sell this to advertisers, insurers, and the like.

It’s an approach rooted in behaviourism, the idea that a particular stimulus always leads to a particular response. The closer they can get to that, the more money they can make. It’s true what Aral Balkan has been pointing out for years: we’re being farmed by surveillance capitalists.

Who wants to live this kind of life? But it’s not just the explicit auto-suggestions that we need to be wary of. Social networks like Facebook and Twitter feed off, and monetise through advertising, the emotions we feel about certain subjects. They are rage machines.

Stimulus: response. Let’s not lose our ability to think, to reason, and (above all) to be rational.


This post is Day 33 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com
