Weeknote 12/2023
I boiled snow for the first time this morning. Last night, I wild camped somewhere in The Cheviots as the clocks ‘sprang’ forward. Waking up before dawn, I put my iPod on shuffle, skipped one track and listened to Surprise Ice by Kings of Convenience. The song couldn’t have been more apt, given that my tent was covered in snow and ice!
The overnight camp was in preparation for walking at least half of The Pennine Way in a few weeks’ time. I’ve got all the kit I need, so I was just testing the new stuff out and making sure the existing stuff was still in good working order. The good news is that it’s very unlikely to get colder during my walk than it did last night, and I was warm enough to sleep!
This week, I’ve been helping WAO finish off our work (for now) with Passbolt and Sport England, continuing some digital strategy work for the Wellbeing Economy Alliance, and doing some work around Greenpeace and KBW. I updated a resource I’d drafted on open working for Catalyst, and put together a proposal for some badges work under the auspices of Dynamic Skillset.
We had a co-op half day on Tuesday in which we ran, and eventually passed, a proposal about experimenting with a ‘drip release’ model for our content. Essentially, this would mean having patrons (platform TBD) who would get our stuff first, with everything becoming open a few weeks later. This emerged from an activity in which we each came up with a roadmap for WAO for the next few years. We were amazingly well-aligned, as you’d hope and expect!
This week, I published:
- Sharepidation
- 5 reasons why microcredentials are not Open Badges in name, spirit, or ethos
- Realigning Microcredentials with Open Badges
I also helped a little with this post from Laura, and she helped me with one that I’ve written but which has yet to be published. I’ve also drafted another couple of posts and an email-based course, and (with a little help) created a weather app using the OpenWeatherMap API. Which brings us onto…
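As an aside, for future me: the heart of a weather app like that is a single HTTP call to OpenWeatherMap’s current-weather endpoint. A minimal sketch in Python, assuming the free `/data/2.5/weather` endpoint (the city name and API key below are placeholders, and the helper names are my own):

```python
# Minimal OpenWeatherMap "current weather" sketch using only the
# standard library. You need a (free) API key from openweathermap.org.
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.openweathermap.org/data/2.5/weather"


def build_url(city: str, api_key: str, units: str = "metric") -> str:
    """Build the request URL for the current-weather endpoint."""
    params = urllib.parse.urlencode(
        {"q": city, "appid": api_key, "units": units}
    )
    return f"{API_BASE}?{params}"


def summarise(payload: dict) -> str:
    """Turn the API's JSON response into a one-line summary."""
    description = payload["weather"][0]["description"]
    temp = payload["main"]["temp"]
    return f"{payload['name']}: {description}, {temp}°C"


def fetch_weather(city: str, api_key: str) -> str:
    """Fetch and summarise the current weather for a city."""
    with urllib.request.urlopen(build_url(city, api_key)) as resp:
        return summarise(json.load(resp))


# Example (needs a real key):
# print(fetch_weather("Morpeth", "YOUR_API_KEY"))
```

Everything beyond that one call is just presentation, which is exactly the sort of glue code ChatGPT turns out to be good at.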
I’ve continued to find ChatGPT 4 really useful in my work this week. It’s like having a willing assistant always ready. And just like an assistant, it sometimes gets things wrong, makes things up, and a lot of the time you have domain expertise that they don’t. AI-related stuff is all over the place at the moment, especially LinkedIn, and I share the following links mainly for future me looking back.
While I got access to Google Bard a few days ago, the experience Google currently provides feels light years behind OpenAI’s offering. This week there were almost too many AI announcements to keep up with, so I’ll just note that ChatGPT was connected to the internet this week. Previously it relied solely on training data with a cutoff in 2021. Also, OpenAI have announced plugins which look useful, although I don’t seem to have access to them yet.
There are lots of ways to be productive with ChatGPT, and this Hacker News thread gives some examples. I notice that there are quite a few people giving very personal information to it, with a few using it as a therapist. As Tristan Harris and Aza Raskin point out in the most recent episode of their podcast Your Undivided Attention, AI companies encourage this level of intimacy, as it means more data. However, what are we unleashing? Where are the checks and balances?
Writing in Jacobin, Nathan J. Robinson explains that the problem with AI is the problem with capitalism. Robinson’s attitude reflects my own:
> It’s interesting that we talk about jobs being “at risk” of being automated. Under a socialist economic system, automating many jobs would be a good thing: another step down the road to a world in which robots do the hard work and everyone enjoys abundance. We should be able to be excited if legal documents can be written by a computer. Who wants to spend all day writing legal documents? But we can’t be excited about it, because we live under capitalism, and we know that if paralegal work is automated, that’s over three hundred thousand people who face the prospect of trying to find work knowing their years of experience and training are economically useless.
>
> We shouldn’t have to fear AI. Frankly, I’d love it if a machine could edit magazine articles for me and I could sit on the beach. But I’m afraid of it, because I make a living editing magazine articles and need to keep a roof over my head. If someone could make and sell an equally good rival magazine for close to free, I wouldn’t be able to support myself through what I do. The same is true of everyone who works for a living in the present economic system. They have to be terrified by automation, because the value of labor matters a lot, and huge fluctuations in its value put all of one’s hopes and dreams in peril.
If ChatGPT is going to revolutionise the economy, we should probably decide what that should look like. Otherwise, we’re running the risk of Feudalism 2.0. We’ve heard the hyperbole before, but if AI systems are exhibiting ‘sparks’ of artificial general intelligence (AGI), then we shouldn’t be experimenting on the general population. Perhaps Nick Cave is correct that the problems with the world are “certitude and indifference”.
Next week is my last before taking three weeks off. I’m very much looking forward to a family holiday and am psyching myself up for my long walk. Ideally, I’d like to do the whole 268 miles in one go over a two-week period. But I don’t think my family (or my body!) would be up for that…