Open Thinkering


Tag: machine learning

Applying my digital literacies model to ‘AI literacy’

I haven’t yet been able to prioritise thinking deeply about what it would look like to apply my Essential Elements of Digital Literacies to the field of AI.

But hey, why not just ask ChatGPT 4?

AI literacy refers to the understanding of artificial intelligence and the ability to use and interact with AI technologies effectively. It is essential in today’s world, as AI becomes increasingly integrated into various aspects of daily life. To apply Doug Belshaw’s 8 elements of digital literacies to the concept of AI literacy, we can adapt each element to focus on AI-related skills and understanding:

Cultural: Recognizing how AI technologies impact different cultures and societies, as well as understanding the ethical considerations related to AI deployment. This includes being aware of cultural biases in AI systems and acknowledging the implications of AI on the workforce and social structures.

Cognitive: Developing the mental processes involved in using AI tools, such as understanding the basic principles of AI, machine learning, and data analysis. This encompasses skills like interpreting AI-generated insights, recognizing the limitations of AI systems, and understanding the potential consequences of AI decision-making.

Constructive: Gaining the ability to create, modify, and contribute to AI technologies, including working with AI algorithms, designing AI-based solutions, and understanding the software and hardware components of AI systems.

Communicative: Effectively communicating and collaborating with others regarding AI concepts, tools, and applications. This includes being able to explain AI systems to non-experts, discussing ethical and social implications, and sharing AI-related knowledge with others.

Confident: Becoming comfortable and confident in using AI tools and navigating AI-based systems. This involves having the self-assurance to explore, learn, and adapt to new AI technologies and applications.

Creative: Leveraging AI technologies to express oneself creatively and inventively, such as using AI-generated art, music, or writing tools, and applying AI to develop innovative solutions to problems.

Critical: Critically evaluating AI technologies, algorithms, and their implications on society. This involves understanding issues like fairness, accountability, transparency, and privacy in AI systems, as well as assessing the reliability and limitations of AI-generated outputs.

Civic: Engaging with AI technologies to participate in civic activities and social and political life. This includes using AI for social good, advocating for responsible AI policies, and promoting digital equity in AI access and education.

By incorporating these adapted elements into AI literacy, individuals can develop a comprehensive understanding of AI and its impact on society, as well as the skills necessary to thrive in an AI-driven world.

That’s… not bad? I think this would actually be a decent basis for an institution to create a framework that could end up as a curriculum.

All Watched Over by Machines of Loving Grace

They say that as you get older, you get to know yourself better. I think that’s true on several levels: over the last decade in particular I’ve got to know my physical limits and quirks, my emotional temperature in different situations, as well as my spiritual leanings.

Yesterday, I had an opportunity to get to know myself even better by spending six hours in hospital. This, apparently, was unrelated to my previous episode, and followed 45 minutes of literally heart-wrenching pain in the night. If you know the scene from Indiana Jones and the Temple of Doom, the first 15 minutes of that pain felt like the temple priest forcing his hand into the prisoner’s chest and ripping out his heart.

Fun times.

I won’t give a blow-by-blow account, but suffice to say that I was looked after well (as ever) by the NHS with care and attention. The reason I was in for so long was because I had to have two ECGs and two blood tests a certain number of hours apart. This revealed that I had slightly elevated levels of Troponin, a protein released by the heart when it’s damaged. This damage can occur when it’s stressed through exercise, so it’s normal to have some Troponin in the blood, even when you’re otherwise healthy.

I was discharged when the cardiac consultant said he wasn’t too concerned that my Troponin levels were showing 15 when the ‘normal’ scale goes up to 14. I have to go back if I have any problems and I’m allowed to continue my normal exercise regime.


Both yesterday and a couple of weeks ago I found myself having to tell the story of what happened multiple times. As a patient, you’re also kind of expected to remember anything that might be at all relevant, including all of the details. My wife works for NHS Digital, so I have a small insight into some of the difficulties of sharing data even within the same hospital, never mind between services.

But it got me thinking.

In the film Her (2013) the main protagonist falls in love with his very human-sounding AI, who acts on his behalf in many different situations. What I’d like is some type of machine learning that works on my behalf with my data, and surfaces potentially-relevant things to healthcare professionals.

With the best will in the world, busy doctors can’t have read every bit of relevant information about every injury and health condition. Nor can they surface data about newly-presenting symptoms, for example heart conditions that may or may not be related to Covid.

I realise this is a very long way off, and that I’ve acted against this by refusing to share my health data with third-party services. But I’d love to use something that I could actually trust, and that provided benefit both to me as a patient and to the healthcare professionals trying to help me. I’m sure people are working on it. I just hope they have patient care instead of $$$ in mind.


Title from Adam Curtis’ excellent documentary series. Image by Deepmind.

Some thoughts on programmatic Open Badge image creation using AI models

Towards the end of yesterday’s meeting of the Open Recognition working group of the Open Skills Network we got on to talking about how it might be possible to programmatically create the images for Open Badges when aligning with Rich Skill Descriptors (RSDs).

Creating lots of badges manually is quite the task, and gets in the way of Open Recognition and Keeping Badges Weird. So the first step would be to speed up the process by creating a style guide and sharing SVG templates that are editable in a wide range of image-editing applications.

For example, for the upcoming Badge Summit, we’re issuing a badge which we want others to be able to issue for their own events. So we asked Bryan Mathers to create an image where the outside would remain the same, but the middle bit could be swapped out easily. Here’s the result:

I Kept Badge Weird at The Badge Summit 2022 badge
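As a rough sketch of how that ‘swappable middle’ could work programmatically (the template filename, placeholder marker, and centre artwork below are all invented for illustration), a simple string substitution into a shared SVG template would do it:

```python
# Minimal sketch of the 'fixed outside, swappable middle' idea: a shared SVG
# template with a placeholder marker that gets replaced by event-specific
# artwork. The template path and marker are hypothetical.
from pathlib import Path

TEMPLATE_PATH = Path("badge-summit-template.svg")  # shared outer ring, colours, text
PLACEHOLDER = "<!-- CENTRE_ARTWORK -->"            # marker inside the template

def make_event_badge(centre_fragment: str, output_path: str) -> None:
    """Drop event-specific SVG artwork into the shared badge template."""
    template = TEMPLATE_PATH.read_text(encoding="utf-8")
    Path(output_path).write_text(
        template.replace(PLACEHOLDER, centre_fragment), encoding="utf-8"
    )

# e.g. a different centre image for each event that issues the badge
make_event_badge('<circle cx="256" cy="256" r="120" fill="#f5a623"/>',
                 "my-event-badge.svg")
```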

The next step would be to have a style guide which makes it faster to create unique badges. That involves a colour palette, font choices, shapes, etc. You can see this in action for our Keep Badges Weird community badges:

Selection of badges available to earn in the Keep Badges Weird community.

Lovely as they are, this is still labour-intensive and time-consuming. So how about we create them programmatically? My former Mozilla colleague Andrew Hayward did a great job of this years ago, and the Badge Studio site (code) is still online at the time of posting.

The advantage of programmatically creating badge images is that it (helpfully) constrains what you can do to be in alignment with a style guide. Creating good-looking badges can take seconds rather than hours!

An example of a badge image created using Badge Studio
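To make the ‘constrained by a style guide’ point concrete, here’s a rough sketch (this isn’t how Badge Studio itself works, and the badge name is made up) where every visual choice is seeded from a hash of the badge name and drawn only from an approved palette and shape list:

```python
# Sketch of style-guide-constrained badge generation: each design decision is
# drawn from a fixed palette and shape list, seeded by the badge name so the
# same name always produces the same design. Not Badge Studio's actual code.
import hashlib

PALETTE = ["#2e294e", "#1b998b", "#f46036", "#e71d36", "#c5d86d"]  # style guide colours
SHAPES = ["circle", "hexagon", "shield"]                           # approved shapes

def badge_design(badge_name: str) -> dict:
    """Deterministically pick style-guide elements for a given badge name."""
    digest = hashlib.sha256(badge_name.encode("utf-8")).digest()
    return {
        "background": PALETTE[digest[0] % len(PALETTE)],
        "accent": PALETTE[digest[1] % len(PALETTE)],
        "shape": SHAPES[digest[2] % len(SHAPES)],
    }

print(badge_design("Keep Badges Weird: Community Call Participant"))
```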

However, what if we want to create badge images quickly and programmatically, based on certain inputs? There are libraries that can create unique shapes and images based on an email address, and are sometimes used as the default avatar on platforms. Here, for example, is RoboHash, which can be used to create good-looking unique ‘robots’:

Selection of unique robots created using RoboHash

It’s not a huge leap to think about how this could be used to create badge images based on a unique reference from an RSD.
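As a quick sketch of that leap (the RSD reference below is made up), RoboHash only needs a string in the URL, so any stable identifier could seed a unique, repeatable image:

```python
# Fetch a deterministic RoboHash image seeded by an RSD reference: the same
# reference always yields the same 'robot'. The identifier is a made-up example.
from urllib.parse import quote
from urllib.request import urlretrieve

rsd_reference = "https://example.org/rsd/critical-thinking"  # hypothetical RSD URL
image_url = f"https://robohash.org/{quote(rsd_reference, safe='')}.png?size=400x400"
urlretrieve(image_url, "badge-image.png")
```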

But, if we’re getting a computer to generate something, why not something crazy and unique? Lately, there’s been a lot of noise about different AI models that can be used to generate images based on text input. One of the best-known of these, the images from which have been circulating in my corner of social media quite frequently, is Craiyon (formerly DALL-E mini). Here’s a really basic attempt:

Nine images created via the Craiyon AI model showing recognition badges

They’re quite… uninspiring and generic? However, they didn’t take any thought or effort on my part.

There are more advanced models than Craiyon, such as Midjourney. This can create stunning images, such as the one featured in Albert Wenger’s post about machine creativity. In fact, it was that post that got me thinking about all this!

You can create up to 25 images using the Midjourney Discord account before paying, so I created this one as quickly as possible using the same prompt as above. You can create variations and upscales, so I asked it to create variations of one of the four images created, and then upscaled it to the max. I ended up with the following:

Round patch-style badge (black/yellow with orange shapes)

This is also quite boring, to be fair, but the awesome and weird thing about doing this in Discord is that you see the prompts that other people are entering to create images — e.g. ‘huge potato chip eating a bag of humans’ or ‘rainbow slushy trippy wallpaper’. I noticed that there were certain prompts that led to amazing outputs, so I tried ‘rainbow waterfall in a hexagon,bright,trippy’ and got these options:

rainbows and hexagons, AI created art

The bottom-left image looked potentially interesting, so I asked for variations and then upscaled one of them. I then just cropped it into a 12-sided shape and ended up with the following. I guarantee it’s one of the most unique badge images you’ll have seen recently!

12-sided rainbow hexagon images

The point is that there are almost infinite variations here. And, as I found, getting the words right, and then doing variations and upscaling, is actually quite a creative process!

As ever, I don’t have the technical skills to stitch all of this together, but I guess my job is to encourage those who do in particularly fruitful directions. The workflow would go something like:

  1. Community decides new RSDs
  2. Organisation or individual creates badge metadata aligning with one or more RSD
  3. AI model generates badge image

I should imagine a lot of this could be automated so that badges that align with a particular RSD could have visual similarity.
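I can’t build it, but here’s a rough sketch of how those three steps might hang together. Everything in it is an assumption on my part: the RSD fields, the prompt template, and especially generate_image(), which is just a placeholder for whichever text-to-image API (Craiyon, Midjourney, or something else) ends up being used.

```python
# Hypothetical end-to-end sketch: RSD in, badge metadata plus AI-generated
# image out. None of these structures or calls come from a real implementation.
from dataclasses import dataclass

@dataclass
class RichSkillDescriptor:
    id: str            # canonical RSD URL
    skill_name: str
    description: str

def build_prompt(rsd: RichSkillDescriptor) -> str:
    """Turn an RSD into a text-to-image prompt that follows a style guide."""
    return f"round patch-style badge, bright, trippy, representing '{rsd.skill_name}'"

def generate_image(prompt: str) -> bytes:
    """Placeholder for a call to a text-to-image model (Craiyon, Midjourney, etc.)."""
    raise NotImplementedError("swap in whichever image-generation API is available")

def create_badge(rsd: RichSkillDescriptor, issuer: str) -> dict:
    """Assemble minimal Open Badges-style metadata aligned with the RSD."""
    image_bytes = generate_image(build_prompt(rsd))
    return {
        "name": f"{rsd.skill_name} badge",
        "description": rsd.description,
        "issuer": issuer,
        "alignment": [{"targetName": rsd.skill_name, "targetUrl": rsd.id}],
        "image": image_bytes,  # would be baked into the badge image in practice
    }
```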

This could be amazing. Anyone want to give it a try? 🤩
