Open Thinkering

Tag: ambiguity

Nobody knows what digital literacy is.

A request for information series

I’m currently in the latter stages of my Ed.D. thesis focusing on the concept of ‘digital literacy’. It’s been a long haul – 6 years (spanning 4 jobs, 2 supervisors, and the birth of 2 children) working part-time in a quickly-moving digital world and, to be honest, I’m rather glad it’s coming to an end.

One of the reasons I’m glad that I’ll finish my doctoral thesis this year is that it’s clear just how much we need some alignment and operationalisation around the term ‘digital literacy’ rather than the endless squabbles, petty niggling and swamping of agendas by large organizations. I outlined these problems in 2009 and, unfortunately, they haven’t improved any. The fact that we’re still debating what is meant by the traditional term ‘literacy’ says a lot about how far we’re able to get on with operationalising notions of ‘digital literacy’ in the current climate. I’ll be explaining my notion of a ‘trajectory of ambiguity’ in an upcoming journal article: discussions of ‘digital literacy’, I believe, have become mired in endless debates half-way through this trajectory.

During my studies I’ve read countless reports and watched a myriad of presentations claiming (or at least assuming) some kind of authority when explaining what constitutes digital literacy. Many of these elide at least two agendas – usually e-safety or media literacy – with almost all of them missing the main point: digital literacy isn’t the ‘aftermath’ of literacy at all.

We don’t need to be told what digital literacy is; we need to discuss, build consensus, and start aligning around a reasonable definition. Granted, there might be a difference in emphasis here and there, but only through such alignment will we be able to start operationalising the concept of ‘digital literacy’ and use it for the benefit of learners.

And ultimately, after all the academic churning and grandstanding, isn’t that what it’s all about?

Image CC BY-NC Pulpulox !!!

Creative Ambiguity and Digital Literacy

I’m (re-)writing my first journal article at the moment, ostensibly in order to make my viva easier when I’ve finished my Ed.D. thesis. It’s easier to prove an ‘original contribution to knowledge’ when some of it has been published in a peer-reviewed journal! You’ll understand, therefore, why this post, which constitutes the first part of the article, is Copyright (All Rights Reserved).

All human communication is predicated upon vocabularies. These can be physical in the form of sign language but, more usually, are oral in nature. Languages, therefore, are codified ways in which a community communicates. However, such languages are not static but evolve over time to meet both changing environmental needs and to explain and deal with the mediation and interaction provided by tools.

As Wittgenstein argued, a private language is impossible as the very purpose of language is communication with others. Those with whom one is communicating must have the ‘key’ to open the ‘box’. Yet if all language is essentially public in nature, this raises the question of how popular terms can be used in such a variety and multiplicity of ways. Terms, phrases and ways of speaking have overlapping lifecycles, used by various communities at particular times. A way of describing a concept often enters a community as a new and exciting way of looking at a problem, perhaps as a meme. Meanwhile, or soon after, the same concept might be rejected by another community as out of date, as ‘clunky’ and lacking descriptive power.

Thomas Kuhn’s The Structure of Scientific Revolutions provides some insight into this process. Kuhn identified periods of ‘normal’ science in a given field which would be followed by periods of ‘revolutionary’ science. The idea is that a community works within a particular paradigm (‘normal’ science) until the anomalies it generates lead to a crisis. A period of ‘revolutionary’ science follows in which competing paradigms that can better explain the phenomena are then explored. Some are accepted and some are rejected. Once a paradigm gains general acceptance then a new period of ‘normal’ science can begin and the whole process is repeated. Kuhn’s theory works in science because there are hard-and-fast phenomena to be explored; theories and concepts can be proved or disproved according to Popper’s falsifiability criterion.

The same is not necessarily true in the social sciences, however: it can be unclear what would constitute a falsification of certain widely-held concepts and theories. Indeed it is often the case that they gain or lose traction by the status of the people advocating them rather than the applicability and ‘fit’ of the concept. In addition, a concept or theory may serve a purpose at an initial particular point in time but this utility may diminish over time. Unfortunately, it is during this period of diminishing explanatory power that terms are often evangelised and defined more narrowly. This should lead to a period of ‘revolutionary’ social science but this is not necessarily always the case. If, for example, a late-adopting group holds political power or controls funding streams, even those in groups who have rejected the concept may continue to use it.

An example of this process would be the coining of the terms ‘digital natives’ and ‘digital immigrants’ in 2001 by Marc Prensky. This led to a great deal of discussion, both online and offline, in technology circles, education establishments and the media. Debates began about the maximum age of a ‘digital native’, what kind of skills a ‘digital native’ possessed, and even whether the term ‘digital immigrant’ was derogatory. As the term gained currency and was fed into wider and wider community circles, it became more narrowly defined. A ‘digital native’ was said to be a person born after 1980, someone who was ‘digitally literate’, and who wouldn’t even think of prefixing the word ‘digital’ to the word ‘camera’.

It is our belief that the explanatory power of a concept, theory or term in the social sciences comes, at least in part, through its ‘creative ambiguity’. This is the ability of the term – for example, ‘digital native’ – to express a nebulous concept or theory as a kind of shorthand. The amount of ambiguity is in tension with the explanatory power of the term, with the resulting creative space reducing in size as the term is more narrowly defined. Creative spaces can also bring people together from various disciplines, allowing them to use a common term to discuss a concept from various angles.

The literal meaning of a term is the denotative element and includes surface definitions of a term. For ‘digital literacy’ this would be to simply equate the term with literacy in a digital space. The connotative element, on the other hand, deals with the implied meaning of a term. With digital literacy this would involve thoughts and discussion around what literacy and digitality have in common and where they diverge. The creative space is the ambiguous overlap between the denotative and connotative elements.

Such creative ambiguities are valuable as, instead of endless dry academic definitions, they allow for discussion and reflection, often leading to changes in practice. In order to maximise the likelihood and impact of a creative space it is important that a term not be too narrowly defined, for what it gains in ‘clarity’ it loses in ‘creative ambiguity’. There is a balance to be struck.

Terms and phrases, however, can be ambiguous in a number of ways. Some of these types of ambiguity allow for creative spaces between the denotative and connotative elements of a new term to a greater or lesser degree. In other words, they involve greater or smaller amounts of ambiguity.

References

Prensky, M. (2001) ‘Digital Natives, Digital Immigrants’, On The Horizon, 9(5). Available online at http://dajb.eu/fpIs05 (accessed 14 December 2010).

The rest of the journal article deals with Empson’s 7 types of ambiguity as related to the above. You may want to check out the posts I’ve written previously relating to creative ambiguity. I’d welcome your comments!

You don’t ‘build’ better teachers.

Teachers are not robots. You can’t add new modules, reprogram them, or expect them to work regardless of context. These seem to be facts completely alien to Elizabeth Green, writing in an article for the New York Times which appeared in March 2010. It would genuinely surprise me to learn that she’d ever set foot in a classroom, never mind her being a ‘fellow of education’ at Columbia University Graduate School of Journalism. Whatever that means.

It’s far from a logically-structured article. But an article doesn’t have to be logical to be dangerous – the Daily Mail is proof of that. To summarise, Green seems to be advocating, through a clumsy juxtaposition of quotations and roundabout argumentation, that:

  1. Teaching is a science that can be taught.
  2. We need ‘better’ teachers (and the only way to measure this is through student test scores).
  3. Doug Lemov is awesome because he published a book highlighting basic teaching techniques.
  4. Money is probably the most important factor in recruiting better teachers.
  5. Classroom management and specialist knowledge are key to teaching.

Number five is obvious and the other four are obviously wrong: teaching is more art than science, teaching and learning are about much more than examinations, Lemov is just another author, and no-one goes into (nor would go into) teaching for the money.

Simply writing a misguided article isn’t dangerous. It’s dangerous when the author confuses and conflates several different issues to create an ambiguity in the sixth way defined by William Empson:

An ambiguity of the sixth type occurs when a statement says nothing, by tautology, by contradiction, or by irrelevant statements; so that the reader is forced to invent statements of his own and they are liable to conflict with one another. (Seven Types of Ambiguity, p.176)

By neglecting to state explicitly what makes a ‘good’ teacher, Green fosters an ambiguity that, by the end of the article, she seemingly wants you to resolve by believing in the following howlers:

  • She criticises “proponents of No Child Left Behind” for seeing “standardised testing as the solution” but later quotes with approval findings that show “the top 5 percent of teachers” being able to “impart a year and a half’s worth of learning to students in one year, as judged by standardised tests.” (my emphasis)
  • By constructing a narrative (through the juxtaposition of third-party quotations) the article seems to show that paying teachers more leads to an improved ‘calibre’ of teacher. Measured by? “Standardised test scores”. These quotations, it becomes evident by the end of the article, merely mask the author’s opinion.
  • Green snipes at constructivism, “a theory of learning that emphasises the importance of students’ taking ownership of their own work above all else”. No it doesn’t. Do your homework.
  • She assumes that there is one way to be a ‘good’ teacher, that this is unchanging, and that it is independent of context. Quoting with approval Lemov’s assertion that classroom management is as “learnable as playing a guitar”, Green turns on the hyperbole (in what quickly turns into a puff-piece for Lemov and his book) with phrases such as “he pointed to the screen, their eyes raced after his finger.”

Usually I would ignore this as just another article written by just another American in just another country. However, it would seem that the even-more-dangerous Michael Gove, a man against whom I tactically voted, is determined to bring the education system in the UK to its knees by a slavish aping of the worst parts of the American education system.

I despair.
