Open Thinkering

Tag: AI

Exponentially more bad ideas in the world

This is a half-formed thought, and one I may come back to.


If you’re a regular person and you wake up and have a cool-but-not-very-sensible idea, then you might tell a couple of people about it. Nothing happens; your idea dies a short and noble death. If you’re a billionaire and have a cool-but-not-very-sensible idea, then you can fund and staff programmes to bring it to life. I don’t really need to point to examples, but I’ll gesture in the general direction of most things the Gates Foundation have done in education.

I think that might change with AI, and specifically AI agents, which are defined in the following way in a recent MIT Technology Review article:

The grand vision for AI agents is a system that can execute a vast range of tasks, much like a human assistant. In the future, it could help you book your vacation, but it will also remember if you prefer swanky hotels, so it will only suggest hotels that have four stars or more and then go ahead and book the one you pick from the range of options it offers you. It will then also suggest flights that work best with your calendar, and plan the itinerary for your trip according to your preferences. It could make a list of things to pack based on that plan and the weather forecast. It might even send your itinerary to any friends it knows live in your destination and invite them along. In the workplace, it could analyze your to-do list and execute tasks from it, such as sending calendar invites, memos, or emails.

People are already wringing their hands about the ‘AI slurry’ taking over the web, but what about when we go up a couple of notches from content? What happens when our misguided, or even actively dangerous, ideas can be acted upon by AI? I’m actually thinking less Universal Paperclips than AI as a kind of Rumpelstiltskin or Midas character.

More soon.

AI for boring project tasks

Yesterday, WAO ran a pre-mortem for a new project we’re kicking off. We used Whimsical, but we wanted the results in a spreadsheet for easy reference. This is the kind of task that used to take an hour of my life, and a boring hour at that. LLMs like GPT-4o make it easy:

Screenshot of Whimsical board with the instructions:

We ran a pre-mortem activity and I've attached the output. Working step-by-step, I'd like you to:

1. List all of the risks (yellow sticky notes), grouping them by theme (blue stickies). The sticky notes are grouped horizontally.
2. List all of the preventative measures (orange stickies) for each of the risks
3. List all of the mitigating actions (green stickies) for each of the risks
4. Create a table that I can copy into a Google spreadsheet that has the following columns:
- Theme
- Risk
- Preventative measures
- Mitigating actions

A few minutes later, I had this:
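(As an aside, if you’d rather script this kind of thing than paste it into a chat interface, a minimal sketch using the OpenAI Python SDK might look like the following. The model name, file path, and condensed prompt are placeholders rather than exactly what I used; in practice I just used the ChatGPT interface.)

# Rough sketch: send an exported image of the Whimsical board to GPT-4o
# and ask for a risk table. Assumes the OpenAI Python SDK (openai >= 1.0)
# and an API key in the OPENAI_API_KEY environment variable; the file
# path, model name, and condensed prompt are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the exported board image so it can be sent as a data URL
with open("pre-mortem-board.png", "rb") as f:
    board_image = base64.b64encode(f.read()).decode("utf-8")

prompt = (
    "We ran a pre-mortem activity and I've attached the output. "
    "Working step-by-step, list the risks (yellow stickies) grouped by "
    "theme (blue stickies), the preventative measures (orange stickies) "
    "and the mitigating actions (green stickies) for each risk, then "
    "output a table I can copy into a Google spreadsheet with the "
    "columns: Theme, Risk, Preventative measures, Mitigating actions."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{board_image}"},
                },
            ],
        }
    ],
)

# The reply is plain text, including the table, ready to paste elsewhere
print(response.choices[0].message.content)

The output is just text, so the table can be pasted straight into Google Sheets in the same way as the chat output.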

As Ethan Mollick says in his book Co-Intelligence: Living and Working with AI, it’s worth experimenting with AI in almost every corner of your life. Being able to outsource the boring stuff, and to use AI as a thought partner for things you may have missed, can be transformative.

For example, I asked a follow-up question as part of this conversation for things that we might have missed. It surfaced things around cultural (mis)understanding, data security, and policy changes, among other things.

At the moment, the main point of friction for me, whichever LLM I use, is that it forgets context. Sometimes that happens even within the same chat, and occasionally even when I’ve created a custom GPT. I haven’t used Amazon Titan, nor have I done much with Google Gemini, so I should explore those further.

Ideally, what I’d like is for an AI assistant to conversationally implement workflows that we’ve agreed upon in kick-off meetings. That might also involve my AI assistant talking to a client’s assistant for scheduling, progress updates, and so on. It would be a huge improvement over the hodge-podge of systems involved in multi-organisation projects.

Painting over problems with AI in the third sector

A breeze block wall being painted over. An image of a door is being painted onto the wall as well.

Attending professional events can often reveal the wildly different mental models that underpin the work that we do. This was particularly evident to me at an event I attended yesterday, where the speakers’ worldviews seemed to differ significantly from my own. It’s a reminder that even within sectors where we assume shared values, such as the third sector, we can understand and interpret the world in vastly different ways.

For instance, the speakers at this event demonstrated that you can work in the third sector and still uphold capitalist values: focusing on ‘closing the gap’, for example, without questioning why the gap exists in the first place. To my mind, it’s not enough to merely address disparities; we should be challenging the structures that create them.

Advocacy and activism should be integral to third sector work, pushing for systemic change rather than just mitigating symptoms. Yet much of what I heard was a ‘hope’ that people won’t be left behind in an inevitable AI-driven future, without a critical examination of how this future is shaped and who it benefits.

I also encountered some confusing references in passing to ‘AI literacy’. The term was used in ways that often lacked clarity and coherence. In my thesis, I argued that new literacies are not thresholds to be crossed but conditions to be cultivated. AI literacy should be treated no differently from other digital literacies, requiring deliberate practice and an understanding of underlying mechanisms. It’s about encouraging and developing ‘habits of mind’ that allow individuals to navigate and critically engage with AI technologies.

We’ve been exploring definitions at ailiteracy.fyi, and I’m convinced that, as with other forms of literacy, definitions are a power move, with individuals and organisations seeking to dictate what does or does not constitute ‘literate practice’. AI literacy is one of many digital literacies involving not only technical skills but also an understanding of the ethical, societal, and economic implications of AI. Feel free to read about the eight elements which underpin this here.

Going back to third sector organisations and AI, the rush to adopt this particular technology seems to be focused mainly on increasing service efficiency in the face of limited budgets and challenges around funding. That lack of funding is itself a symptom of our capitalist system, with its ‘winners’ and ‘losers’, which inevitably leaves whole sections of the population behind.

Organisations find themselves in a position where they must continuously do ‘more with less’, driving them to embrace technologies that promise efficiency without questioning the broader implications. This often leads to a superficial adoption of AI, focusing on immediate gains rather than long-term, sustainable, and equitable solutions.

We need to think differently. If we can’t adopt a holistic and inclusive perspective towards humanity, how can we expect to do so for our interdependent ecosystem? While AI has the potential to aid in climate mitigation and health improvements, we have to collectively adopt a new mental model to use it effectively. Otherwise, it’s going to be an accelerant for a somewhat-dystopian future; not because the technology itself is problematic, but because of the structures within which it is used.

This means rethinking our values and approaches, moving away from a mindset of competition and scarcity towards one of collaboration and abundance. It may sound utopian, but only then can we harness technology’s potential to create a more just and equitable world.


Image CC BY-ND Visual Thinkery. Bryan originally created this to illustrate the concept of ‘openwashing’ but I think it also works in relation to what I’m talking about here: people pretending that there’s anything other than a big wall between the haves and the have-nots in society.
