Open Thinkering


Tag: ambiguity

TB871: Ambiguity and cognitive biases

Note: this is a post reflecting on one of the modules of my MSc in Systems Thinking in Practice. You can see all of the related posts in this category


You may have watched this video, which was referenced by the brilliant Cathy Davidson in her book Now You See It. But perhaps you haven’t seen, as I hadn’t, a related one called The Door Study. TL;DR: it was one of the first confirmations of ‘change blindness’ outside a laboratory setting. What we perceive is usually what we’ve been primed to pay attention to, rather than simply data we sense from our environment.

It’s a good reminder that, phenomenologically speaking, much of what we experience of the external world, such as colour, doesn’t actually ‘exist’ in an objectively meaningful way. We construct our observation of the environment; everything is a projection. As the module materials quote Heinz von Foerster as saying: “Objectivity is the delusion that observations could be made without an observer” (The Open University, 2020a).

When it comes to systems thinking, this is a key reason why the idea of ‘perspective’ is so important: the world is different depending on our point of view, and we can struggle to see that it is even possible to observe it differently. A good example of this, other than the familiar rabbit-duck illusion, is the Necker cube (The Open University, 2020b):

A series of lines that look like a 3D cube

This is literally a flat pattern of 12 lines, so the fact that most of us see it automatically as a 3D object is due to our brains modelling it as such. However, our brains aren’t sure whether we should see the underside of the bottom face of the cube, or the upper side of the top face. As with the rabbit-duck illusion, our brains can see one of these, but not both at the same time.

Going further, consider the ink blot below, taken from the module materials (The Open University, 2020c):

Using this diagnostically is usually referred to as a Rorschach test, which many people think is pseudo-scientific. However, it is interesting as a parlour trick to show how people think about the world. For example, I see two people with their hands behind their backs, kissing. I’m not sure what that says about me, if anything!

Spending a bit more time with the ink blot, I began to see it as a kind of eye covering that one might wear at a masked ball. Switching between these two is relatively easy, as the primary focus for each is on a different part of the image.

There are similar approaches to the above, for example the Thematic Apperception Test (TAT) created by Henry A. Murray and Christiana D. Morgan at Harvard University, which presents subjects with an ambiguous image (e.g. a photograph) and asks them to explain what is going on.

The rationale behind the technique is that people tend to interpret ambiguous situations in accordance with their own past experiences and current motivations, which may be conscious or unconscious. Murray reasoned that by asking people to tell a story about a picture, their defenses to the examiner would be lowered as they would not realize the sensitive personal information they were divulging by creating the story.

(Wikipedia)

These approaches show how much we construct our understanding of the world rather than simply experiencing it as it objectively is ‘out there’. There’s a wonderful image created by JM3, based on some synthesis by Buster Benson, which I had on the wall of my old home office (and will have in my new one when it’s constructed!). It groups the various cognitive biases to which we humans are susceptible:

Diagram titled "Cognitive Bias Codex, 2016" showing different cognitive biases organized around a central brain image into four categories: "Too Much Information," "Need To Act Fast," "Not Enough Meaning," and "What Should We Remember?".

As you can see, these are boiled down to:

  • What should we remember?
  • Too much information
  • Not enough meaning
  • Need to act fast

Here are ten of the most common biases we are prone to:

  1. Confirmation bias: Favouring information that confirms pre-existing beliefs while discounting contrary information, often seeking validation rather than refutation.
  2. Fundamental attribution error: Overemphasising personality-based explanations for others’ behaviours and underestimating situational influences, particularly noted in Western cultures.
  3. Bias blind spot: Believing oneself to be less biased than others, exemplifying a self-serving bias.
  4. Self-serving bias: Attributing successes to oneself and failures to external factors, motivated by the desire to maintain a positive self-image.
  5. Anchoring effect: Relying too heavily on the first piece of information encountered (the anchor) when making decisions, influencing both automatic and deliberate thinking.
  6. Representative heuristic: Estimating event likelihood based on how closely it matches an existing mental prototype, often leading to misjudgment of risks.
  7. Projection bias: Assuming others think and feel the same way as oneself, failing to recognize differing perspectives.
  8. Priming bias: Being influenced by recent information or experiences, leading to preconceived ideas and expectations.
  9. Affinity bias: Showing preference for people who are similar to oneself, often unconsciously and based on subtle cues.
  10. Belief bias: Letting personal beliefs influence the assessment of logical arguments, leading to biased evaluations based on the perceived truth of the conclusion.

Of course, just having these on one’s wall, or being able to name them, doesn’t make us any less likely to fall prey to them!

References

TB871: Metaphor, ambiguity, and conceptual blending

Note: this is a post reflecting on one of the modules of my MSc in Systems Thinking in Practice. You can see all of the related posts in this category


I’m managing to skip quite a few activities in this module because I’ve thought through the impact of metaphor and ambiguity before, in quite some depth. In fact, I’ve got a whole other blog on it. This post is prompted by the mention of ‘conceptual blending’ in the module materials:

Cognitive scientists Gilles Fauconnier and Mark Turner (2002) have written about what they call conceptual blending, which is the human mind’s general ability to match two or more different inputs – such as images, words, events, frames, identities or even embodied actions – and to selectively project elements from those different inputs and create a new, blended mental space that has its own structure that retains a connection to those original inputs. They share a range of examples where new mental spaces are produced, including life-stage rituals, sporting achievements and political commentary.

[…]

Significantly, the theory of conceptual blending argues that positions, such as ideas or arguments derived in the blended mental space, can have an effect on our thinking. Consequently, perceptions and judgements about situations involving any of the initial input spaces are modified. Metaphor seems to fit with this way of understanding the mind because it brings together two different notions into a single whole. The boss and dinosaur become an imaginary boss–dinosaur composite. Tutsi and cockroach become a single conceptual blended whole, which could then influence cognition and behaviour in relation to Tutsis or cockroaches.

(The Open University, 2020)
Two overlapping circles, one labelled ‘connotative aspect’ and one labelled ‘denotative aspect’. There is an arrow pointing to the overlap.

Very briefly, then, when we yoke together two ideas we create a zeugma or syllepsis — for example ‘digital literacy’. Or, more simply, if we look at a prehistoric example, the idea of a “lion man”. Is the emphasis on the first of these (digital/lion) or on the second (literacy/man)? In other words, are we talking about literacy of the digital, or digital forms of literacy? Likewise, are we talking about a man who acts like a lion, or a lion that resembles a man?

At the overlap of what something denotes and what it connotes is a space of ambiguity. This is where space is opened up for new ideas and creative/playful thinking. However, there are different types of ambiguity, which I’ve written about at length, including in my thesis, but which I’ll summarise here using this diagram:

Continuum of ambiguity ranging from Generative Ambiguity, through Creative Ambiguity and Productive Ambiguity, to ‘Dead Metaphors’.

Given that all communication is in some way ambiguous, what we’re trying to avoid are what Richard Rorty calls “dead metaphors”. These are terms which may have had some explanatory power but which have now devolved into cliche.

This is how disinformation works: it takes things that definitely exist and puts them together in people’s minds in such a way that it creates connections that just aren’t there. Political slogans, marketing materials, and even the way that society in general refers to certain groups can be made more or less ambiguous. For change to happen, I’d argue, things need to be productively ambiguous.

References

TB871: Flamingos and hedgehog croquet

Note: this is a post reflecting on one of the modules of my MSc in Systems Thinking in Practice. You can see all of the related posts in this category


A wicked game of croquet? Lewis Carroll’s Alice’s Adventures in Wonderland, illustrated by Arthur Rackham and published by The Viking Press as one of their ‘Studio’ books.

I’m skipping some of the early activities in TB871 because I’ve already covered them in more depth in TB872. I was, however, quite taken by this metaphor:

Some of you may know about the wonderful game of croquet described in Lewis Carroll’s Alice’s Adventures in Wonderland, in which the balls were hedgehogs that unrolled and walked away and the croquet mallets were flamingos that craned their necks up to Alice rather than staying in the shape required to be a mallet. Human systems are a bit like that: people play by the rules while they want to, but in principle they are quite capable of unrolling and walking away – though there are usually strong incentives not to do so.

(The Open University, 2020)

I referenced Alice’s Adventures in Wonderland a while ago in an article about digital literacy and ambiguity which I co-wrote with my thesis supervisor. In that case it was the Mad Hatter likening a raven to a writing-desk, but I like this one with the flamingos and hedgehogs just as much, as it helps people understand how much the world is in flux.

References
