In a move that will no doubt shock the known world, I’ve decided that my first-ever journal article will be both a collaborative venture and cock a snook at traditional subject disciplines. Provisionally entitled Seven types of ambiguity and digital literacy, I’m co-authoring it with my Ed.D. thesis supervisor Steve Higgins. Allegations that I’m doing so to prove originality in my research ahead of my viva voce by producing an article from an intended thesis chapter are, of course, completely unfounded.
I’m not going to give an overview of the entire article (for obvious reasons) although it will be published in an open-access journal. Suffice to say that we’re introducing the idea that terms such as digital literacy and digital natives/immigrants exhibit a ‘trajectory of ambiguity’ through which they pass on the way to becoming what Richard Rorty calls ‘dead metaphors’.
To save you having to go back and do Philosophy and Linguistics 101, I’ll remind you that the denotative aspect of a term is its surface, or primary, meaning. The connotative aspect of a term is its secondary, or implied, meaning. In the article, which features the overlapping diagram above (I’m not allowed to call it ‘Venn’, apparently), we’re arguing that there are three distinct phases through which terms pass. Whilst they never completely shed their connotative aspect, the edge to the right of ‘Productive ambiguity’ is where the dictionary definitions of terms reside. Generative ambiguity tends to involve ‘blue skies thinking’, Creative ambiguity involves discussing and debating the definition of a term, and Productive ambiguity involves putting it into practice in various contexts.
You’ll be delighted to learn that we’ve done a sterling job in making the article itself ambiguous, situating it in the phase of Creative ambiguity. “Be the change you want to see,” “walk the walk,” etc.
As I mentioned last week in How not to write a thesis, mine isn’t the usual method by which you’d go about writing a doctoral thesis. Normally you’d read books like How to get a PhD and keep your findings to yourself until submission. I, on the other hand, have shared my findings at dougbelshaw.com/thesis since pretty much the beginning and have kept the structure fluid throughout.
Now that I’ve written more than half of it, and given that my first Skype meeting with my thesis supervisor in months is coming up soon, now’s the time to pin down a title and structure. So here’s an attempt:
What is ‘digital literacy’? A Pragmatic investigation.
Problematising traditional (print) literacy
The history of ‘digital literacy’
The ambiguities of ‘digital literacy’
New Literacies: a solution?
But is it ‘literacy’?
Digitality as policy
In a sense, of course, I’ll never be finished with this thesis. There’ll just be a time at which it’s advisable to submit (or they pry it from my RSI-riddled hands…) :-p
Once a year, for a period of nine weeks, my wife appears to be a chronological year older than me. It was her 30th birthday on Thursday, for which I bought her 29 presents and took her to Jesmond Dene House Hotel for afternoon tea. I took yesterday off work as well and, in fact, with some organisation around mLearn 2010, am managing not to return to the office until Tuesday 25th!
Restructuring my thesis
Whilst my original target of submitting my Ed.D. thesis on 1st January 2011 (the earliest date I’m allowed) now looks less likely, I have written more than half of it now. High time, therefore, to be firming up a title and a structure. More on that over the weekend.
Fixing my Mac
I’ve had all manner of problems with my MacBook Pro recently. It’s a work machine and IT services at Northumbria University couldn’t sort it. When I took it to the Apple Store, they recommended a reinstall over the top of the existing operating system. That seems to have done the trick (fingers crossed!)
I very rarely get shouty-shouty, stampy-stampy angry any more. I’m far too civilised and philosophical for that. On the other hand, if something was going to tip me over the edge it would be the Browne Review of Higher Education. For those living under a rock in the UK, or for international readers, some of the recommendations:
Removal of cap on fees
Students since 1998 should pay ‘real’ interest on their student loans
Public money to be targeted at STEM, Business and MFL
I could write several essays on this, but I’ll have to be satisficed by observing that, overall, the recommendations would make it less likely that my offspring attend university, whilst my subjects (Philosophy, History, Education) would be marginalised. Oh, and that £16,000 loan I took out to pay for my tuition? The one on which, the Student Loans Company reckons, I’ve still got over £15,000 left after 8 years of repayments? That would be increased. I think they call that changing the contract after signing. Bar. Stewards. 🙁
I started my degree in Philosophy at the University of Sheffield in 1999, following it with an MA in Modern History at Durham University (2002-3). I stayed there to do a PGCE (2003-4) which was the first year of an MA in Education. I continued this part-time whilst teaching and then transferred to the Ed.D. programme. I’m currently (hopefully!) coming towards the end of writing my thesis on the concept of ‘digital literacy’. In total, then, I’ve been a student in Higher Education for 11 years.
Three of the most important things I’ve learned in the process?
1. Thinking, writing and editing are separate activities
There’s no point trying to think something through whilst in the midst of writing. Stop, go for a walk, or just do something different (like the washing-up). Likewise, editing whilst writing is a frustrating activity. Separate these three activities to be more successful and productive in your academic writing.
2. Don’t copy other people
Obviously don’t plagiarise other people’s work, but also don’t copy the way they go about doing things. Others engaged in research express shock that I don’t use the usual doctoral-level tools such as Endnote, etc. Whilst you should certainly learn from others, create (and continue to iterate) a system that works for you. I use a combination of a personal wiki, Google Scholar, Evernote, Dropbox, XMind and Scrivener. Have the confidence to go your own way.
3. Immersion is more important than chunking
Studying part-time is a whole lot harder than studying full-time, for obvious reasons. When studying part-time, instead of setting aside just one block of time per week, it’s a much better idea to have several shorter sessions. This keeps ideas in your mind and makes it more likely that your subconscious churns over and creates links between concepts!
I had an extremely productive Bank Holiday Monday, writing c.5,000 words of the Methodology section for my Ed.D. thesis. The following is an extract that explains where the philosophy of Pragmatism originated.
The essence of Pragmatism is that there exists no standpoint from which to judge the objective truth or falsity of a statement or belief:
There is no absolute standpoint, and there is no exemption from standpoints; there are only and always relative standpoints… I can in reality think of no absolute whatever; I always tacitly place myself upon the scene as the observer who is beholding things in their relation to himself. (Lovejoy, 1930:81, quoted in Mounce, 1997:159)
Instead of being able to distinguish between ‘primary’ and ‘secondary’ qualities in the world, therefore, we are left with only secondary qualities of which we can speak. The grass is not objectively green, it is only green to me. Pragmatism is a philosophy concerned with action and the practical application of meaning. It is concerned with the development of capacities and habits that allow for human beings to be successful and productive in the world. As we shall see, Pragmatist philosophers have little patience with definitions for their own sake.
As William James explained through the title and content of Pragmatism: A New Name for Some Old Ways of Thinking, there is little ‘new’ in the philosophy of Pragmatism other than its name. Indeed, although Peirce coined the term ‘Pragmatism’ – later switching to ‘Pragmaticism’, a term “ugly enough to be safe from kidnappers” (Collected Papers, 5.414) – the ideas it represented have older origins and wider usage. Ralph Waldo Emerson, for example, demonstrated his adherence to a proto-Pragmatist project, stating:
Our life is an apprenticeship to the truth that around every circle another can be drawn; that there is no end in nature, but every end is a beginning; that there is always another dawn risen on mid-noon, and under every deep a lower deep opens. (Emerson, R.W., ‘Circles’ in Goodman, R.B., 1995:25)
And later in the same essay:
Step by step we scale this mysterious ladder; the steps are actions, the new prospect is power. Every several result is threatened and judged by that which follows. Every one seems to be contradicted by the new; it is only limited by the new. The new statement is always hated by the old, and, to those dwelling in the old, comes like an abyss of skepticism.
Peirce and James formalised this way of thinking in such a way that it provided a philosophical approach to problem-solving. Peirce’s project was anti-Cartesian in approach and focus, whereas James was concerned with the concept of ‘truth’ – especially as it related to religious belief. In addition, they both discussed the skepticism to which Emerson alludes, rejecting it as debilitating. James in particular thought that cultivating a habit of doubt in relation to truth statements was indicative of an attitude rather than an intellectual position (Mounce, 1997:88). Skepticism results from confining oneself solely to the intellectual and theoretical sphere, which is as dangerous as confining oneself solely to the non-rational.
Instead, James argued that we should allow our ‘passional nature’ to help us decide upon the truth or falsity of statements and propositions:
Our passional nature not only lawfully may, but must decide an option between two propositions, whenever it is a genuine option that cannot by its nature be decided on intellectual grounds; for to say, under such circumstances, ‘Do not decide, but leave the question’ is itself a passional decision – just like deciding yes and no – and is attended with the same risk of losing the truth. (James, 1918:108)
Like the historian, we gain certainty through commitment, by leaving certain areas unquestioned. Certainty both in history and science comes through being ‘imperfectly theoretical’ – i.e. being theoretical up to a point. As Mounce (1997:99) puts it, “It is only in philosophy, where commitment is at a minimum, that scepticism flourishes without limit.”
As a result, endless definitions do not serve to advance our understanding of the world and move us closer towards truth. ‘Bachelor’ is an oft-cited example of a definition that means something precise. However, an alien to our planet would have to understand the institution of marriage, which cannot be easily explained in a sentence, before grasping the meaning of ‘bachelor’. Instead of definitions, then, it is the commitment to a statement, proposition or belief that helps us make our ideas clear. To use another example from Mounce, there is no sharp demarcation between day and night, but we still find it useful to use these terms (Mounce, 1997:104).
It is precisely the fact that Pragmatism allows for error and chance that makes it a practical philosophy. Instead of committing ourselves to omniscience when using the words ‘know’ and ‘certainty’ we use them as practical instruments to go about our business in the world. I, for example, know that I am to attend a conference in a foreign country soon. I can express this certainty despite my attendance depending upon my continued health, an absence of airline strikes, and various geological phenomena not taking place.
For Pragmatists, and James in particular, truth becomes close to utility – what is ‘good in the way of belief’. James’ The Varieties of Religious Experience is a defence of this position. We cannot base beliefs on a theoretical conception of the world because this would, in effect, be a ‘view from nowhere’. Pragmatism, it will be remembered, is a philosophy that rejects the existence of an objective standpoint from which to ascertain the truth or falsity of a statement or belief. Reasoning is allied to experience rather than replacing it.
James was the original populariser of Pragmatism, the one who explained it to the intelligentsia of the early 20th century. However, it is important to briefly sketch the origins of Pragmatism in Peirce to understand the true aim of the overall project. Peirce rejected Cartesian dualism along with the Kantian distinction between the phenomenal and noumenal world. To Peirce and later Pragmatists, what Kant termed the noumenal world – the unknowable world ‘as it exists in itself’ – is a fiction. Likewise, Peirce rejected Descartes’ recommendation to start from a position of scepticism:
Philosophers of very diverse stripes propose that philosophy shall take its start from one or another state of mind in which no man, least of all a beginner in philosophy, actually is. One proposes that you should begin by doubting everything, and says that there is no one thing that you cannot doubt, as if doubting were as ‘easy as lying’… But, in truth, there is but one state from which you find yourself at the time you do ‘set out’ – a state of mind in which you are laden with an immense mass of cognition already formed, of which you can not divert yourself if you would; and who knows whether, if you could, you would not have made all knowledge impossible to yourself? Do you call it doubting to write down on a piece of paper that you doubt? If so, doubting has nothing to do with any serious business. But do not make believe; if pedantry has not eaten all reality out of you, recognise, as you must, that there is much that you do not doubt in the least. (Peirce, 1935(V) para 416:278, quoted in Mounce, 1997:21)
Meaning can only be grasped through practice, not through armchair philosophising, for Peirce and other Pragmatists. The ‘Pragmatic Maxim’ as formulated by Peirce states that a conception does not differ from another conception (either in logical effects or importance) other than in the way it could conceivably modify our practical conduct (Mounce, 1997:33).
It is this Pragmatic Maxim that I shall be using to test concepts surrounding ‘digital literacy’ in my Ed.D. thesis! 🙂
Goodman, R.B. (Ed.) (1995) Pragmatism: A Contemporary Reader
I don’t like using other people’s methods for doing stuff.
I don’t like storing files offline.
But I’ve made an exception. I’ve just bought Scrivener after using it for less than 24 hours. And that’s despite it having a 30 (non-consecutive) day trial. It’s going to revolutionise my writing of longer texts – like that Ed.D. thesis I’m almost half-way through…
So give it a try. But make sure you watch the introductory video first so you can do it some justice. 🙂
After last week’s post about designing opportunities for ‘creative ambiguity’ I had a brief Twitter conversation with @siibo about what exactly I meant. Which is kind of the point. :-p
The great thing that came out of it, however, was being directed towards a couple of Japanese concepts of which I previously knew nothing: Wabi-sabi and Mono no aware.
Wabi-sabi represents a comprehensive Japanese world view or aesthetic centered on the acceptance of transience. The aesthetic is sometimes described as one of beauty that is “imperfect, impermanent, and incomplete”. It is a concept derived from the Buddhist assertion of the Three Marks of Existence, specifically impermanence. Note also that the Japanese word for rust is also pronounced sabi, and there is an obvious semantic connection between these concepts.
Characteristics of the wabi-sabi aesthetic include asymmetry, asperity, simplicity, modesty, intimacy, and the suggestion of natural processes.
To me wabi-sabi is a different concept than creative ambiguity. Whereas the former is ‘imperfect, impermanent and incomplete’, those terms and concepts that provoke creative ambiguity often claim to be perfect, permanent and complete.
If we use the concept of ‘Digital Natives’ as an example, we can see that this creatively ambiguous term is imperfect, impermanent and incomplete only in retrospect. Much like Kuhnian ‘normal science’, many latched onto the idea of Digital Natives in order to build an edifice that could be applied more universally. It was only when there were too many problems with the concept that a period of ‘revolutionary science’ began. I would argue that we are still in this revolutionary phase.
Whilst I would love to proclaim that everyone should embrace wabi-sabi, it flies in the face of western academia and cultural practices. Pragmatically, therefore, it would be better (to my mind) to make people aware of how creative ambiguity can be useful in a specified period of time.
Mono no aware
Mono no aware (…literally “the pathos of things”), also translated as “an empathy toward things,” or “a sensitivity of ephemera,” is a Japanese term used to describe the awareness of mujo or the transience of things and a bittersweet sadness at their passing.
People like the status quo, it makes them feel safe. To my mind, the reason why we don’t like endings and change is that it reminds us that we will all eventually die. This sadness is summed up very nicely in mono no aware.
Something that is central to creative ambiguity is its time-limited nature. What from a Pragmatic point of view is ‘good in the way of belief’ at one point in time may not be at another. We must be willing to let go of terms that have served us well but no longer have any cash value.
Whilst I used to be a ruthless believer in Occam’s Razor, I’m now of the opinion that there’s actually no long-term harm in allowing a plethora of terms to proliferate. In fact, the more the better. The best, most useful terms – those that actually have some explanatory power, are good in the way of belief, and have some ‘cash value’ should win out.
We just need to be aware that, as wabi-sabi teaches us, nothing will ever be anything other than imperfect, impermanent and incomplete. And, as mono no aware teaches us, we should not be sad when a term outlasts its value! 🙂
I realised this week that I never posted the completed review. So here it is, for what it’s worth, in full! 😀
The Hyperlinked Society: Questioning Connections in the Digital Age
Joseph Turow and Lokman Tsui, Editors (2008)
Ann Arbor: University of Michigan Press
ISBN 0-472-05043-5 (pbk)
Reviewer: Doug Belshaw, Durham University
In the introduction to The Hyperlinked Society, editor Joseph Turow explains how the book became a follow-up to a 2006 conference that ‘came together to address the social implications of instant digital linking’. ‘We did not intend to solve any particular problem at the meeting,’ he writes. ‘Instead, the goal was to shed light on a remarkable social phenomenon that people in business and the academy usually take for granted.’ The stated aim of the resulting book? ‘[N]ot to drill deeply into particular research projects [but], rather, to write expansively, provocatively – even controversially – about the extent to which and ways in which hyperlinks are changing our worlds and why.’ The book, therefore, is published explicitly as a platform upon which others ‘will launch their own research projects and policy analyses.’ (p.5)
Given this stated aim, it is easier to forgive The Hyperlinked Society‘s unconventional structure and somewhat eclectic nature. There are three main sections to the book. The first, ‘Hyperlinks and the Organization of Attention’ is almost entirely descriptive, ostensibly to set the scene for the rest of the book. The second ‘Hyperlinks and the Business of Media’ appears incongruous in an academic book; the essays and articles it contains feature few references and assertions abound. The final section is the most rewarding for researchers and academics in the field of new literacies and internet culture. It features an abundance of analysis – everything from the moral nature of hyperlinks to what constitutes the ‘online public sphere’. This final section is worth the price of admission alone.
Puzzlingly, given the editor’s proud statement in the introduction that over 200 countries were represented at the conference that led to the book’s existence, the examples given are almost entirely taken from the USA. Moreover, the American political situation and how it reflects, and is reflected by, internet culture is a dominant theme. Indeed there is more than one reference to ‘our country’ and what ‘we’ need to do. This does not sit comfortably at times, making this (English) reviewer feel like an outsider.
But there is much to like and admire in The Hyperlinked Society even if, at times, the authors try and relate anything and everything to the concept of the hyperlink. The editors have discovered and successfully begun to fill a niche: that space between popular internet culture books such as Clay Shirky’s Here Comes Everybody and more traditional academic articles. The Hyperlinked Society successfully combines elements of both, especially in the third section and in particular Adamic’s The Social Hyperlink. This essay continues the collection’s dominant theme of political blogging, showing empirically that the ‘blogosphere’ is divided, with hyperlinks mirroring political affiliations. Coupled with this, however, is a corrective to the possible conclusion that hyperlinks cause this ‘echo-chamber’ effect. An analysis of online communities in the USA, Kuwait and the UAE demonstrates the powerfully complex cultural and contextual factors at work. The reader is left fascinated, interested, and wanting more – especially given the ‘Do bloggers kill kittens?’ story with which Adamic ends the article. This, of course, fits hand-in-glove with the editor’s desire for others to use the collection as a starting point for their own research.
A second dominant theme in The Hyperlinked Society is whether hyperlinks are inherently a ‘good’ or a ‘bad’ thing for society. Most deal with this in a cursory way, but Weinberger’s The Morality of Links confronts the issue head-on. In perhaps the most valuable and reflective essay in the collection, Weinberger analyses his personal belief that ‘Links are good’. His wide-ranging and knowledgeable philosophical treatment of the problem takes fully three pages of background, covering everything from a critique of Essentialism to the value of a funnel. Weinberger concludes that there are two reasons why ‘Links are good’. First, the Web is a huge potentiality – but not in the same way ‘a stick could potentially be used to prop up a car hood’ (p.189). Instead, ‘the potential is the sum of the relationships embodied in links’ which makes the Web ‘a potential that we’re actively creating and expanding’ (p.189). The second reason we’re better off with links, states Weinberger, is because ‘every time we click on a link, we take a step away from the selfish solipsism that characterizes our age – or, to be more exact, that characterizes how we talk about our age’ (p.189-90). The world, says Weinberger (quoting Ted Nelson), has never been so ‘intertwingled’.
The third and final dominant vein running through The Hyperlinked Society is the emancipatory nature of hyperlinks. Whilst several authors raise privacy concerns and implications, the general consensus is that through ‘mashups’, ‘countermapping’ and other online grassroots activities, traditional power structures are beginning to be challenged. Halavais, for instance, in The Hyperlink as Organizing Principle explains how the changing way hyperlinks are used represents ‘a kind of collective unconscious’ that represents ‘deep social and cultural structures’ (p.39). Halavais also points out, with some apparent glee, that researchers can passively track social relationships and connections through the aggregation of links – thus alleviating the ‘Hawthorne effect’ and the bias inherent in self-reporting.
Finkelstein, in Google, Links and Popularity versus Authority highlights two important instances where technical issues relating to hyperlinks threatened to undermine their potential for emancipation and democracy. The first is what he deems ‘the commodification of social relations’. This is a result of the ‘blurring the lines between business and friendship’ (p.115) that occurs online. A second, related, problem is that of search engine algorithms being based on inbound links. Google’s PageRank algorithm, for example, works on a ‘weighted combination’ of factors centering on how popular a website is with other websites. Herein, of course, lies a problem. If you want to talk about the dangers of a racist hate site, making parents and teachers aware of the URL, linking to the site would be counter-productive. It would constitute an inbound link – and therefore improve the racist hate website’s Google PageRank. As a result, the ‘nofollow’ attribute was invented to allow links in such cases without the attendant positive conferral of status (or ‘Google juice’ as it is commonly termed). This is an example of what The Hyperlinked Society does well as a collection: dealing with both the social and technical aspects of problems caused by Web-mediated communication.
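For the curious, the link-counting idea at issue here can be sketched in a few lines. The following is my own toy illustration of a PageRank-style calculation, not Google’s actual (and far more complex, proprietary) algorithm; the page names, damping factor and iteration count are arbitrary. A link marked ‘nofollow’ is simply left out of the link graph, which is why the site under discussion gains no score from being talked about:

```python
# Toy sketch of link-based ranking: a page's score depends on the
# scores of the pages that link to it. Links carrying 'nofollow'
# would simply be omitted from the `links` graph below.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # every page keeps a small base score regardless of links
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # a page shares its current score equally among its outlinks
            share = rank[page] / len(outlinks)
            for target in outlinks:
                if target in new_rank:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

# The hate site receives no (followed) inbound links, so its score
# stays at the minimal base level.
graph = {"blog": ["news"], "news": ["blog"], "hate_site": []}
ranks = pagerank(graph)
```

Running this, the unlinked-to site ends up with the lowest score of the three, which is exactly the dynamic that makes linking to a site you are criticising counter-productive without ‘nofollow’.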
The Hyperlinked Society is not an overly-edited collection. There are places where the same stories are told, the same studies cited, and similar ground covered. But given the and/and/and nature of hyperlinks and the Web, this is highly appropriate. Instead of fitting rigorously into a pre-determined order, the authors are free to explore their own interests in a way that suits them. Such a structure and approach works well, and serves to reinforce the themes outlined above: the case of political blogging, the nature of hyperlinks, and their emancipatory potential.
However, as a researcher into new literacies and 21st-century education practices, I was disappointed to see terms such as ‘link-literacy’, ‘savvy’ and ‘competence’ used uncritically. There is a wealth of research in this area towards which the individual authors or, at the very least, the editor could have directed the reader. Although Lankshear and Knobel’s Digital Literacies: Concepts, Policies and Practices was published in the same year as The Hyperlinked Society, their earlier volume New Literacies: Everyday Practices and Classroom Learning (2006) was available as a guide to the field.
Overall, The Hyperlinked Society is satisfying and informative when read in its totality, but also serves as an excellent reference point, with useful overviews to each section provided by the editors. It would be of most use to those running postgraduate courses exploring Web-related issues as it covers such a wide range of issues. The final section in particular is an object lesson on how to explore the wider implications of a very particular technology.
Lankshear, C. & Knobel, M. (2006) New Literacies: Everyday Practices and Classroom Learning. Open University Press
Lankshear, C. & Knobel, M. (2008) Digital Literacies: Concepts, Policies and Practices. Peter Lang Publishing
Shirky, C. (2008) Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press
In the 1930s, William Empson came up with seven types of ambiguity. He applied them to poetry and literary criticism, but I believe they can be applied more widely. Roughly, they are:
Word or grammatical structure is effective in several ways at once.
Two or more meanings are resolved into one.
Two ideas, relevant because of the context, are resolved into one.
Two or more meanings do not agree, but make clear a complicated state of mind in the author.
Author discovers idea in the act of writing.
Statement says nothing (e.g. tautology) so reader has to make up meaning.
Two meanings of the word or phrase are opposite within the context (showing a division in the writer’s mind).
I’ve long thought the concept of ‘digital literacy’ was an ambiguous one, and am beginning to look at the ways in which definitions of it are so. Although I’m still in the early stages of my analysis, it’s becoming clear that the view of ‘digital literacy’ held by official bodies in Europe is ambiguous in a very particular kind of way.
Take the following quotations, for example:
Information and communications technologies (ICTs) affect our lives every day – from interacting with our governments to working from home, from keeping in touch with our friends to accessing healthcare and education.
To participate and take advantage, citizens must be digitally literate – equipped with the skills to benefit from and participate in the Information Society. This includes both the ability to use new ICT tools and the media literacy skills to handle the flood of images, text and audiovisual content that constantly pour across the global networks.
Digital literacy is a process that affects at least four dimensions:
Operational: The ability to use computers and communication technologies.
Semiotic: The ability to use all the languages that converge in the new multimedia universe.
Cultural: A new intellectual environment for the Information Society.
Civic: A new repertoire of rights and duties relating to the new technological context.
In this sense, digital literacy today is similar to what UNESCO has defined for some time as “media education”. According to this organisation, media education “enables people to gain understanding of the communication media used in their society and the way they operate and to acquire skills in using these media to communicate with others”. To accept the similarity, we only need to acknowledge the evident fact that practically all media today are based on the use of digital technologies.
I believe these to be examples of the second type of ambiguity. That is to say that they involve a situation where ‘two or more meanings are resolved into one.’ Specifically, they combine media literacy with technical (and procedural) skills to form some kind of quasi-umbrella term that leans towards the third kind of ambiguity.
These kinds of definitions of ‘digital literacy’ are common within the official literature of the European Commission and related bodies. Digital literacy becomes a hybrid notion that appears to have legitimacy because of the relatively straightforward notion that each word connotes. It is not clear, however, that joining the two words into a phrase results in anything meaningful.
Interestingly, Empson hints that ambiguity may be a three-dimensional process and that the seven types of ambiguity he identifies lie on a continuum. I think there’s definite scope for some visualization in my thesis… 😀
* ‘Advisor of the eLearning programme in the field of digital literacy, Universidad Autónoma de Barcelona, European Commission’
The focus of our discussion was my forthcoming submission of an article to an academic journal. Whilst my recent book review will be published in E-Learning and Digital Media 7:3 later this year this will (hopefully!) be the first time anything original of mine will be published in a peer-reviewed journal. I’m quite excited. 🙂
Regular readers know how open and candid I am about almost every area of my life via this blog and Twitter. I’m sure you’ll forgive me this once when I don’t go into too much detail about my proposed article; it would be easy to get scooped! Suffice to say I’m looking to apply a framework that should help understand just how exactly ‘literacies of the digital’ are ambiguous.
We also discussed the concept of Flow, popularised by Mihaly Csikszentmihalyi. I was a big fan of this theory when I first came across it, but now I realise it’s as empty a concept as ‘digital literacy’. Still, I do believe that such terms have some kind of Pragmatic utility – they are ‘good in the way of belief’. I’ve got a Venn diagram in mind to explain this in the article I’m writing.
Steve said something quite powerful in our conversation about ‘compressing depth of thought’. If you use too much terminology, compress ideas into too small a space and be overly concise then readers have to ‘read out’ rather than ‘read in’ to your work. If they’re not ‘reading in’ then they’re not applying. That, he says, is why ‘lighter, fluffier’ stuff gets more readily applied, whilst more ‘serious, focused’ stuff is sometimes ignored. I’ve certainly found that even with some of my blog posts.
Finally, I mentioned that if I heard someone uncritically use the term ‘digital native’ in my presence (or without tongue firmly in cheek), I was likely to lay the smackdown on them. In fact, Prensky has since (in a 2009 article) moved on to talking about ‘digital wisdom’. He’s basically saying “I was wrong” without putting it in so many words. Trouble is, he’s wrong about the digital wisdom too… :-p