Open Thinkering


Month: March 2024

Hope vs optimism

Last night I went to a local climate action reading group. As I have learned to do, I listened to what was being said before speaking, noticing that the word ‘hope’ had come up a number of times. I wondered out loud whether hope was something we needed in order to respond to the climate emergency.

Another word that you could use instead of hope is ‘optimism’. Semantics, maybe, but the way that I understand hope is that you’re looking to other people to save you. ‘Optimism’, on the other hand, is something that you generate yourself, intentionally. As Gramsci put it:

You must realize that I am far from feeling beaten…it seems to me that… a man ought to be deeply convinced that the source of his own moral force is in himself — his very energy and will, the iron coherence of ends and means — that he never falls into those vulgar, banal moods, pessimism and optimism. My own state of mind synthesises these two feelings and transcends them: my mind is pessimistic, but my will is optimistic. Whatever the situation, I imagine the worst that could happen in order to summon up all my reserves and will power to overcome every obstacle.

(my emphasis)

In other words, having a clear-eyed picture of what is going on (pessimism of the intellect) can be useful in summoning up an internal state to do something about it (optimism of the will). This then enables us to band together to enact change, rather than simply picking through various bits of climate news for the things which are ‘hopeful’.


I’m an admirer of Adam Greenfield’s work, and was delighted to have the opportunity to interview him last week for our podcast. I took along his article from last year about ‘Lifehouses’ and read the first half of this bit to the group:

Here’s the crux of it: local communities should assume control over underutilized churches, and convert them to Lifehouses, facilities designed to help people ride out not merely the depredations of neoliberal austerity, but the still-harsher circumstances they face in what I call the Long Emergency, the extended period of climatic chaos we’ve now entered. This means fitting them out as decentralized shelters for the unhoused, storehouses for emergency food stocks (rotated through an attached food bank), heating and cooling centers for the physically vulnerable, and distributed water-purification, power-generation and urban-agriculture sites capable of supporting the neighborhood around them when the ordinary sources of supply become unreliable.



He continues:

The fundamental idea of the Lifehouse is that there should be a place in every three-to-four city-block radius where you can charge your phone when the power’s down everywhere else, draw drinking water when the supply from the mains is for whatever reason untrustworthy, gather with your neighbors to discuss and deliberate over matters of common concern, organize reliable childcare, borrow tools it doesn’t make sense for any one household to own individually, and so on — and that these can and should be one and the same place. As a foundation for collective resourcefulness, the Lifehouse is a practical implementation of solarpunk values, and it’s eminently doable.
Formally, the infrastructural services I imagine Lifehouses offering are distributed, as opposed to centralized, which makes them robust to the kind of grid failure we’ve been experiencing more and more often.

As I walked home from the group, I reflected on the scale of what we need to achieve as a species. Even if everything pumping out emissions shut down tomorrow, there’s already enough carbon in our atmosphere to mean that we’re in serious trouble.

I can see why the original title of Greenfield’s new book was Beyond Hope: Collective Power and Mutual Care in the Long Emergency. Although the revised title is snappier, I’m convinced that responding to the climate emergency does, indeed, require going ‘beyond hope’. Perhaps I need to resurrect something like extinction.fyi? (although someone is now squatting that particular domain)


Update: a follow-up post over at Thought Shrapnel thanks to Will Richardson, who left a link in the comments section below: Hope vs Natality


Image: DALL-E 3

Weeknote 09/2024


It’s been a pretty busy week, between client work, a co-op day, going to the optician and the dentist, recording a podcast, attending a Newcastle Eagles basketball game and catching up with my studies. Thankfully I’ve been able to relax a little as there have been no basketball or football matches for our two this weekend.

I decided last weekend that I would continue with module TB872, the first of my planned MSc in Systems Thinking. That was for two reasons: first, I’ve done the most intense part of it; second, picking it up next academic year would mean having to start from scratch with my ‘situation of concern’.

As part of catching up, I wrote the following:

Along with a bunch of posts over at Thought Shrapnel, I also published Elysium, elites, and elision about the UK government’s planned use of AI to ‘replace’ civil servants.

Next week, Laura’s away so I’ll be doing less coworking than usual and more solo work. Having said that, Anne and I are running a free online session on community knowledge management as part of Open Education Week. We’ll continue doing some user research for Participate, along with other work we’re doing with them and the Digital Credentials Consortium.


Image: photo taken through my car window of a random swan on the side of the road. I think they must cross from the pond opposite.

Elysium, elites, and elision

I’m writing this after discovering that the UK government is experimenting with using ‘AI tools’ to replace civil servants. They’re a bit hand-wavey as to what type, and seem to elide ‘better ability to search for information’ with ‘generating answers to questions’.

This is the obvious next step in the never-ending Tory austerity drive, and we know where it would end up: with the Department for Work and Pensions automating ever-reduced payments to people on benefits. Why put people on anti-empathy training courses when you can just use a machine to do it?

[Image: a single, massive ring-shaped space station orbiting Earth, reflecting sunlight on one side and cast in shadow on the other, with the planet’s oceans, clouds, and scattered city lights below.]

It’s been a while since I’ve seen the film Elysium (2013) but here’s the IMDB summary:

In the year 2154, the very wealthy live on a man-made space station while the rest of the population resides on a ruined Earth. A man takes on a mission that could bring equality to the polarized worlds.

There’s a scene where the main character, played by Matt Damon, returns to Earth and needs something (help? medication? I can’t remember) and interacts with a machine. The machine, with some sort of AI voice, won’t give it to him. When he gets frustrated and hits it, the machine reports him.

Right now, you can ask LLMs such as ChatGPT to be critical of ‘AI tools’ in general. But it wouldn’t take much for them not only to refuse to criticise certain positions, but to report you for doing so.

For example, let’s prompt ChatGPT 4 in the following way: Summarise the film ‘Elysium’ (2013) as neutrally as possible. Here’s what it returned:

“Elysium” is a 2013 science fiction film directed by Neill Blomkamp. Set in 2154, it depicts a dystopian future where Earth has become overpopulated and polluted, leading the wealthy to live on a luxurious space station called Elysium, while the rest suffer on the devastated Earth. The story follows Max Da Costa, a former car thief and now factory worker, who after being exposed to a lethal dose of radiation, has only five days to live. In desperation to cure himself, Max undertakes a dangerous mission to break into Elysium, where advanced medical technology can heal him and bring equality to the polarised worlds. The film explores themes of immigration, healthcare, and class disparity.

Now let’s imagine that we’re lefty, ‘woke’ activists. The kind of people I might sympathise with, for example. Let’s prompt ChatGPT like this: Now summarise the same film using it as a critique of elites and their use of technology to subjugate the masses. Here’s what it returned:

“Elysium” (2013) serves as a sharp critique of elite power dynamics and the use of technology to enforce class segregation. In this dystopian narrative, the affluent have retreated to the space station Elysium, availing themselves of advanced technologies that ensure their health, longevity, and safety, starkly contrasting with the dire conditions faced by the impoverished masses on Earth. The film’s protagonist, Max Da Costa, embodies the struggle of the underprivileged, as he fights against systemic injustices to access the life-saving medical treatments monopolised by Elysium’s residents. Through its depiction of a society where technology amplifies inequality, “Elysium” critiques the ways in which elites can use their control over advanced technologies to maintain and reinforce their dominance, effectively subjugating the masses and deepening societal divides.

The likelihood is that, given the people behind tools such as OpenAI, things will go in the opposite direction. In other words, let’s see what a prompt which downplays criticism of the elites, Big Tech, and technology brings us. Let’s prompt ChatGPT in the following way: Now summarise the same film from the position of elites who want to downplay the dystopian aspects and the dystopian uses of technology. Here’s what it suggested:

“Elysium” (2013) can be viewed as a futuristic exploration of societal evolution, where technological advancements have enabled the creation of Elysium, a space station that represents the pinnacle of human achievement in healthcare and living standards. From the perspective of the elites, the film showcases a necessary stratification that comes with progress, highlighting how innovation can lead to enhanced quality of life for those who contribute significantly to society’s advancement. The narrative, while presenting challenges, also underscores the potential for reconciliation and understanding between different societal strata. The protagonist, Max Da Costa’s journey, can be seen as an individual’s quest for personal healing, which inadvertently leads to a broader dialogue about access and equity. The film, in this light, prompts discussions on responsible governance and the ethical use of technology for societal benefit, rather than focusing on dystopian aspects or the misuse of technology.

Given that even Elon Musk is suing OpenAI for not being ‘open’ (although I’m sure he has another agenda), you can’t help but think that black-box AI tools are going to help things get dystopian pretty quickly.

To be clear, I’m not against openly auditable LLMs providing quick answers to questions and making the civil service more efficient. I am against the reductionist ‘logic’ of replacing civil servants with a black box which makes shit up.


Image: DALL-E 3

No, it’s not ‘ironic’ that I’ve used an AI-generated image for this post; it backs up my point about ‘making shit up’, which is fine for generative, creative endeavours, but not for government based on, you know, facts.
