Open Thinkering


Tag: complexity

TB872: Systemic inquiry as a social technology

Note: this is a post reflecting on one of the modules of my MSc in Systems Thinking in Practice. You can see all of the related posts in this category.


A large arrow extending from left to right, symbolising a workflow or process. Within the confines of this arrow, a sparse collection of 3D boxes representing projects, including rectangular prisms and cylinders, are arranged to reflect different project types and stages. Small curved arrows, indicating the interconnections or links between these projects, connect the boxes in a seemingly random yet structured pattern, emphasizing the interconnected nature of project management and workflow progression.

If projects are so problematic, and we need more emotion in our decision-making, then what should we do instead? This post focuses on Chapter 10 of Ray Ison’s book Systems Practice: How to Act, which begins with a list of the kinds of things people who want to use an alternative approach need to be able to do.

Ison can be wordy, so I’ve asked ChatGPT for a more straightforward version:

  1. Comprehending the current and historical context of situations.
  2. Recognising and valuing the diverse viewpoints of multiple stakeholders.
  3. Clearly identifying and exploring the underlying purpose of actions or decisions.
  4. Differentiating between the ‘what’, ‘how’, and ‘why’, and determining the appropriate timing for each aspect.
  5. Implementing actions that are purpose-driven, systemically beneficial, culturally viable, and ethically justifiable.
  6. Creating a method to harmonise understanding and practices across different locations and over time, especially in situations where initial improvements are unclear, thus managing a dynamic, co-evolutionary process adaptively.
  7. Sustainably integrating the approach into ongoing practices without oversimplifying or misusing its fundamental principles.

Instead of setting this approach against projects, it’s more of a “meta-form of purposeful action” which provides a “more conducive, systemic setting for programmes and projects”. (See the arrow image above to get the gist.)

We understand systemic inquiry as a meta-platform or process for ‘project or program managing’ as well as a particular means of facilitating movement towards social learning (understood as concerted action by multiple stakeholders in situations of complexity and uncertainty). When conducted with others it can be called systemic co-inquiry.

Ison, R. (2017) Systems practice: how to act. London: Springer. pp.252-253. Available at: https://doi.org/10.1007/978-1-4471-7351-9.

Just because the systemic inquiry is ‘meta’ does not mean that it is necessarily bigger or longer lasting than the programmes and/or projects it contains. Nor is the ‘goal’ of systemic inquiry to create ‘a system’; it is an action-oriented approach where the intention is to produce a change.

The image below, Fig. 10.1 in Ison’s book (p.256), is an activity model of a system to conduct a systemic inquiry. It has been adapted from Peter Checkland’s work.

An activity model of a system to conduct a systemic inquiry, depicted as a series of nested loops in a flow diagram. Starting from the top, the process begins with 'set up structured exploration of situation considered problematical.' The next step is 'make sense of situation by exploring context (culture, politics) using systems models as devices.' Following this, 'tease out possible accommodations between different interests' leads to 'define possible actions to change; that are systemically desirable and culturally feasible.' The final step within the main loop is 'take action to change - creating a new situation.' This leads to a smaller loop consisting of 'monitor,' then 'take control action,' and finally 'define criteria: efficacy, efficiency, effectiveness.' Arrows between each step indicate the flow and sequence of activities within the systemic inquiry process.

If this approach creates a ‘social learning’ then this is a ‘learning system’. But what does that mean? Ison suggests that instead of thinking about it in ontological terms (e.g. “a course or a policy to reduce carbon emissions”) we should think of a learning system as an epistemic device (i.e. “a way of knowing and doing”).

This move constitutes a ‘design turn’, says Ison, away from first-order inquiry (e.g. drawing a boundary to determine what is in/out of scope) to a second-order understanding (e.g. the learning system as existing after its enactment, through human relationships). Both are necessary, it’s just a question of different levels of abstraction and “critical reflexivity”.

Although Ison doesn’t talk about it this way, I guess this is the practitioner (P) reflecting on their own place within a system, making it P(PFMS). See the diagram at the top of this post. When intervening, as an educator, policy maker, or consultant, therefore, there’s a difference between triggering a first-order response (e.g. creating a course or an ‘intervention’) versus a second-order response (e.g. creating the circumstances for people to reflect on their context and take responsibility).


Top image: DALL-E 3 (based on the bottom part of Fig. 10.1 on p.252 of Ison’s book)

Solving for complexity

If there’s one thing I’m called upon to do time and again in my work, it’s to untangle complexity. The result is not simplicity itself, but rather a distillation that nevertheless involves simplifying and prioritising complex issues.

One way of doing this is to use a framing popularised by Donald Rumsfeld in a press briefing back in 2002:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.

Donald Rumsfeld

As the philosopher Slavoj Žižek pointed out a couple of years later, there are also ‘unknown knowns’, the things we’re unaware that we (or our community/organisation) know.

2x2 grid with iceberg in the centre and 'Known knowns', 'known unknowns' at the top, while 'unknown knowns' and 'unknown unknowns' are at the bottom

Let’s use the example of software vulnerabilities. There are those that your team knows about and needs to fix. These are your known knowns. In addition, there are vulnerabilities that you are aware must exist (it’s software!) but that you don’t yet know about. These are your known unknowns.


Both known knowns and known unknowns can be thought of as being the ‘tip of the iceberg’, the bit that you can see and understand. Beneath the surface, however, lurk things of which you’re unaware. These take concerted effort to get to.

Continuing the software analogy, there are attack vectors that are known within the communities and networks of which your team and organisation are part. This is latent knowledge: some unknown knowns that need surfacing in order to become known knowns.

Beyond that, looking at the bottom-right of the grid, there are unknown unknowns, perhaps new (or rediscovered) techniques which could be used, but need much research and synthesis to discover.
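The 2x2 grid above can be sketched as two boolean axes: whether we are aware of an item, and whether the knowledge actually exists somewhere in our organisation or network. This is my own illustrative toy model, not anything from Rumsfeld or Žižek; the example items are hypothetical.

```python
def quadrant(aware: bool, knowledge_exists: bool) -> str:
    """Classify an item into one of the four quadrants of the grid."""
    if aware and knowledge_exists:
        return "known known"        # fix it: it's already on the backlog
    if aware and not knowledge_exists:
        return "known unknown"      # research it: we know there's a gap
    if not aware and knowledge_exists:
        return "unknown known"      # surface it: document latent knowledge
    return "unknown unknown"        # explore it: horizon-scanning territory

# Hypothetical software-security items, triaged into quadrants
items = [
    ("vulnerability already triaged by the team", True, True),
    ("bugs we assume exist but haven't found", True, False),
    ("attack vector known in the wider community", False, True),
    ("technique nobody has discovered yet", False, False),
]
for label, aware, exists in items:
    print(f"{quadrant(aware, exists):>16}: {label}")
```

The useful move, as described above, is that each quadrant implies a different action: fixing, researching, documenting, or exploring.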


Most organisations I work with are aware of the top of the iceberg. They know what they know, and they are aware of the things that perhaps they don’t know. What they need help with are the things that are beyond that.

So how do we prioritise this work around the unknown when everyone’s busy with their day job? The answer is to include all of it in strategic planning, to explicitly move from unknown unknowns through to known knowns in a systematic way.

This can sound quite abstract so let’s once again use a more tangible example. Let’s say that you’re an NGO and trying to make a difference in the area of climate change.

👍 Known knowns — these are things that the NGO knows make a difference either in terms of accelerating or slowing down the rate of climate change. This is the core of their activism and campaigning.

🤔 Known unknowns — these are the areas that the NGO is unsure about and perhaps needs to do some more research. They know how to do this, and so just need to raise money to do the research and move this towards being a known known.

🕸️ Unknown knowns — these are the things that are known by the NGO as a whole, or by the network or community around it. New staff members might not be aware of this latent knowledge, so the best thing an organisation can do to surface these is documentation. Wikis are particularly useful in this regard.

🕵️ Unknown unknowns — these are things outside of the knowledge, experience, and understanding of the NGO and the network and community around it. Historically, these have often been technological, so for example solutions to problems, or new problems that may be caused by inventions/developments.

This last area, unknown unknowns, is the reason that organisations need to employ generalists as well as specialists: people who are interested in lots of things as well as people who spend their time on just one thing.

Isaiah Berlin’s essay The Hedgehog and the Fox is a useful way of unpicking the difference between the two. In a nutshell, hedgehogs are those who try to fit everything into one unifying view of the world, whereas foxes are happy to know many different kinds of things in many different ways. Every organisation needs ‘foxes’ who are aligned with the mission and given time to explore and discover things from unusual places that might be beneficial in surfacing unknown unknowns.


None of the above is easy, and it can feel like it’s outside of the scope of the everyday running of an organisation. At this point in the conversation with clients, friends, and anyone who will listen, I refer to Charles Handy’s sigmoid curve, which is shown below.

Sigmoid curve showing point A (where the intervention should be made) and point B (where the initiative or organisation is already in decline)

Without effective horizon-scanning for both unknown knowns and unknown unknowns, organisations wait until it’s too late (point B) to make changes which will put them back on the path of growth.

As shown by the diagram, it is at point A, during a period of growth, that new knowledge, experience, and understanding should be added to the mix. This allows for the next cycle of growth to happen, but also means a period of time where there is uncertainty and doubt. This is indicated by the shaded area.
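The shape of Handy’s curve can be illustrated with a toy model. This is my own sketch, not a formula from Handy: I use f(t) = t·e^(−t), which rises, peaks, and then declines, and the positions of points A and B are arbitrary choices for illustration.

```python
import math

def lifecycle(t: float) -> float:
    """Toy performance curve: grows, peaks at t = 1, then declines."""
    return t * math.exp(-t)

# f'(t) = (1 - t) * e^(-t), so the peak is at t = 1
peak_t = 1.0

point_a = 0.6  # still growing: intervene here, while there is slack
point_b = 1.8  # already declining: change here is later and harder

eps = 1e-6
# Only at point A is the curve still rising...
assert lifecycle(point_a + eps) > lifecycle(point_a)
# ...while at point B it is already falling.
assert lifecycle(point_b + eps) < lifecycle(point_b)
```

The point of the model is the asymmetry: at A the organisation still has momentum and resources for the uncertainty of a new curve; by B it is spending them just to arrest decline.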


To conclude, this kind of work can seem quite disconnected from the core business of organisations. When we’re busy trying to make better widgets, solve world hunger, or sell more stuff, this kind of work feels like a ‘nice to have’ rather than core to organisational success.

However, I would say that the opposite is true: anyone can create an organisation that can secure some funding and last a few years. The history of both Silicon Valley and your local high street is testament to that. What organisations that have been around a while know, however, is that it’s precisely this kind of work that leads to long-term growth and sustainability.


Need some of this? You can hire me! Get in touch


This post is Day 85 of my #100DaysToOffload challenge. Want to get involved? Find out more at 100daystooffload.com.

Digital myths, digital pedagogy, and complexity

I’m currently doing some research with Sarah Horrocks from London CLC for their parent organisation, the Education Development Trust. As part of this work, I’m looking at all kinds of things related to technology-enhanced teacher professional development.

Happily, it’s given me an excuse to go through some of the work that Prof. Steve Higgins, my former thesis supervisor at Durham University, has published since I graduated from my Ed.D. in 2012. There’s some of his work in particular that really resonated with me and I wanted to share in a way that I could easily reference in future.


In a presentation to the British Council in 2013 entitled Technology trends for language teaching: looking back and to the future, Higgins presents six ‘myths’ relating to digital technologies and educational institutions:

  1. The ‘Future Facing’ Fallacy – “New technologies are being developed all the time, the past history of the impact of technology is irrelevant to what we have now or will be available tomorrow.”
  2. The ‘Different Learners’ Myth – “Today’s children are digital natives and the ‘net generation’ – they learn differently from older people.”
  3. A Confusion of ‘Information’ and ‘Knowledge’ – “Learning has changed now we have access to knowledge through the internet, today’s children don’t need to know stuff, they just need to know where to find it.”
  4. The ‘Motivation Mistake’ – “Students are motivated by technology so they must learn better when they use it.”
  5. The ‘Mount Everest’ Fallacy – “We must use technology because it is there!”
  6. The ‘More is Better’ Mythology – “If some technology is a good thing, then more must be better.”

The insightful part, I think, is when Higgins applies Rogers’ (1995) work around the diffusion of innovations:

  • Innovators & early adopters choose digital technology to do something differently – as a solution to a problem.
  • When adopted by the majority, focus is on the technology, but not as a solution.
  • The laggards use the technology to replicate what they were already doing without ICT.

In a 2014 presentation to The Future of Learning, Knowledge and Skills (TULOS) entitled Technology and learning: from the past to the future, Higgins expands on this:

It is rare for further studies to be conducted once a technology has become fully embedded in educational settings as interest tends to focus on the new and emerging, so the question of overall impact remains elusive.

If this is the situation, there may, of course, be different explanations. We know, for example, that it is difficult to scale-up innovation without a dilution of effect with expansion (Cronbach et al. 1980; Raudenbush, 2008). It may also be that early adopters (Rogers, 2003; Chan et al. 2006) tend to be tackling particular pedagogical issues in the early stages, but then the focus shifts to the adoption of the particular technology, without it being chosen as a solution to a specific teaching and learning issue (Rogers’ ‘early’ and ‘late majority’). At this point the technology may be the same, but the pedagogical aims and intentions are different, and this may explain a reduction in effectiveness.

The focus should be on pedagogy, not technology:

Overall, I think designing for effective use of digital technologies is complex. It is not just a case of trying a new piece of technology out and seeing what happens. We need to build on what is already known about effective teaching and learning… We also need to think about what the technology can do better than what already happens in schools. It is not as though there is a wealth of spare time for teachers and learners at any stage of education. In practice the introduction of technology will replace something that is already there for all kinds of reasons, the technology supported activity will squeeze something out of the existing ecology, so we should have good grounds for thinking that a new approach will be educationally better than what has gone before or we should design activities for situations where teachers and learners believe improvement is needed. Tackling such challenges will mean that technology will provide a solution to a problem and not just appear as an answer to a question that perhaps no-one has asked.

My gloss on this is that everything is ambiguous, and that attempts to completely remove this ambiguity and/or abstract away from a particular context are doomed to failure.

One approach that Higgins introduces in a presentation (no date), entitled SynergyNet: Exploring the potential of a multi-touch classroom for teaching and learning, is CSCL. I don’t think I’d heard of this before:

Computer-supported collaborative learning (CSCL) is a pedagogical approach wherein learning takes place via social interaction using a computer or through the Internet. This kind of learning is characterized by the sharing and construction of knowledge among participants using technology as their primary means of communication or as a common resource. CSCL can be implemented in online and classroom learning environments and can take place synchronously or asynchronously. (Wikipedia)

The particular image that grabbed me from Higgins’ presentation was this one:

CSCL

This reminds me of the TPACK approach, but more focused on the kind of work that I do from home most weeks:

One of the most common approaches to CSCL is collaborative writing. Though the final product can be anything from a research paper, a Wikipedia entry, or a short story, the process of planning and writing together encourages students to express their ideas and develop a group understanding of the subject matter. Tools like blogs, interactive whiteboards, and custom spaces that combine free writing with communication tools can be used to share work, form ideas, and write synchronously. (Wikipedia)

CSCL activities seem like exactly the kind of things we should be encouraging to prepare both teachers and young people for the future:

Technology-mediated discourse refers to debates, discussions, and other social learning techniques involving the examination of a theme using technology. For example, wikis are a way to encourage discussion among learners, but other common tools include mind maps, survey systems, and simple message boards. Like collaborative writing, technology-mediated discourse allows participants that may be separated by time and distance to engage in conversations and build knowledge together. (Wikipedia)

Going through Higgins’ work reminds me how much I miss doing this kind of research!


Note: I wrote an academic paper with Steve Higgins that was peer-reviewed via my social network rather than in a journal. It’s published on my website as Digital literacy, digital natives, and the continuum of ambiguity. I’ve also got a (very) occasional blog where I discuss this kind of stuff at ambiguiti.es.


Photo by Daniel von Appen
