
On the importance of human agency.

Update (6 October 2011): I awoke to the sad news that Steve Jobs, visionary former CEO of Apple, has died. If anyone exemplified the power of human agency, it was Steve.


Exhibit A

Thinking back to meetings I’ve attended over the years, many different experiences blur into one. A few do stand out, however, and two in particular are quite memorable. The first was a staff briefing session at a school whilst I was on my first teaching practice in 2003. The second was in April 2010 when I joined JISC infoNet.

Why were they so memorable? Because they involved steep learning curves and made me think. The jargon and acronyms being bandied about were a useful shorthand to others but confused me. A few meetings later in each case and I was au fait with the terminology and, indeed, using it myself. I had built some ‘mental sandcastles’.

Exhibit B

I used to teach History. In fact, after my degree in Philosophy I self-funded an MA in Modern History to get on to the PGCE Secondary History course at Durham University. It’s fair to say I’m very interested in, and enjoy, reading and talking about history.

As a teenager, however, I was very nearly turned off History (as a subject) by reading some A.J.P. Taylor. Why? What really annoyed me was his ascribing human qualities to countries and states (e.g. talking of Germany as ‘She’) whilst abstracting away from individuals to make a point that suited his grand meta-narrative. Here’s an example of Taylor’s prose from Wikiquote:

The worker is by nature less imaginative, more level-headed than the capitalist. This is what prevents his becoming one. He is content with small gains. Trade Union officials think about the petty cash; the employer speculates in millions. You can see the difference in their representative institutions. There is no scheme too wild, no rumour too absurd, to be without repercussions on the Stock Exchange. The public house is the home of common sense.

Some people may like that kind of stuff, but to my mind it’s severely lacking in resonance. I don’t seem to inhabit the kind of world A.J.P. Taylor describes.

Conclusion

As I attempted to show with Exhibit A, jargon and acronyms can be useful if people are using them as a shorthand to express something that has previously been expressed in detail. Nevertheless, I think it’s probably a good idea to have meetings and conversations every so often where jargon and acronyms are banned. In my experience, people build ‘mental sandcastles’ ostensibly made of the same stuff as those created by others but actually differing based on their experiences, prejudices and preferences. Kicking down those sandcastles once in a while (to continue the metaphor) is probably a good idea.

Things don’t just happen. They are made to happen. This can be due to natural processes but is also, more often than not, down to individual human agency. Organizations have agency, of course they do. An organization is a group of individuals who have come together around a common cause. That organization may seem to ‘express’ certain traits (e.g. a conservative outlook) but this remains the result of collective individual action.

So, to get to my main point in a rather roundabout way, when I see techno-determinist opinions (for that is what they are) dressed up as inevitable facts I have a similar reaction to that of my teenage self reading A.J.P. Taylor. You may well predict that the biggest trends in 2015 will be x, y and z. But, given that nobody predicted everything kicking off in the Middle East earlier this year, you’ll excuse me whilst I look at what people are actually doing whilst you peer into your crystal ball.

The future is ours to shape. Let’s not forget that.

Hog roasts, Amazon EC2 and traffic lights.

I feel that these images, some of the last taken by photojournalists Chris Hondros and Tim Hetherington in Libya before their deaths earlier this week, link in some way to the following. I’m just not sure how to work them in whilst retaining any form of subtlety.


Question: What links a hog roast, the recent Amazon EC2 outage and traffic lights?

Answer: The notion of Civil Society.

The London School of Economics’ Centre for Civil Society defines it thus:

Civil society refers to the arena of uncoerced collective action around shared interests, purposes and values. In theory, its institutional forms are distinct from those of the state, and market, though in practice, the boundaries between state, civil society, and market are often complex, blurred and negotiated. Civil society commonly embraces a diversity of spaces, actors and institutional forms, varying in their degree of formality, autonomy and power. Civil societies are often populated by organizations such as registered charities, development non-governmental organizations, community groups, women’s organizations, faith-based organizations, professional associations, trade unions, self-help groups, social movements, business associations, coalitions and advocacy groups. (via Wikipedia, my emphasis)

At lunchtime today my family and I are heading down to the pond and newly-formed wildlife reserve behind the small cul-de-sac in which we live. The pond was formed by some kind of mining-related sinkhole that I don’t pretend to understand. What I do understand is that it’s now a beautiful space that adds value to our life (and to our house). Today is the opening ceremony at which a hog roast will be enjoyed free-of-charge, with people coming together to celebrate the space. More spaces for meeting means a greater likelihood of unmediated interaction.

Last March I was in Turkey with my good friend and collaborator Nick Dennis at the request of EUROCLIO, which is doing some work on behalf of the Dutch government. Nick and I helped train History teachers on the use of technology in education (see presentation here), part of a many-pronged strategy by the Dutch government aiming to raise the level of Turkish ‘civil society’ in preparation for Turkey eventually joining the European Union.

With the Turkish educators, technology was a Trojan horse: the whole point of the programme was to get across the ‘multiperspectivity’ of History and, once that was established, to equip them to communicate that to the next generation. The only other strategy I can remember from the ten or so mentioned was that of making sure that traffic lights both worked properly and were sequenced in ways that were ‘European standard’ (i.e. enabled a good flow of traffic). It’s the little things that make a big difference.

So far, so obvious. The wildlife reserve and our work in Turkey relate fairly directly to the definition of Civil Society given earlier. But what about the Amazon EC2 outage? What’s that got to do with anything?

First, some context:

Cloud computing is all very well until someone trips over a wire and the whole thing goes dark.

Reddit, Foursquare and Quora were among the sites affected by Amazon Web Services suffering network latency and connectivity errors this morning, according to the company’s own status dashboard.

Amazon says performance issues affected instances of its Elastic Compute Cloud (EC2) service and its Relational Database Service, and it’s “continuing to work towards full resolution”. These are hosted in its North Virginia data centre. (TechCrunch)

Having recently considered moving this blog to Amazon EC2 because it’s ‘never down’, I breathed a sigh of relief. Bluehost may be slower at serving up content than it used to be but at least it’s never completely failed me. Outsourcing via set-it-and-forget-it only works if you’ve got a backup plan.
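To make that ‘backup plan’ point concrete, here’s a minimal sketch of the kind of availability check you could run against your own hosting. The URLs and the fallback action are hypothetical placeholders, not a description of how Bluehost or EC2 actually behave.

```python
# A minimal availability check, illustrating the 'backup plan' point above.
# The URLs below are hypothetical placeholders.
import urllib.request

SITES = [
    "https://example-blog-on-ec2.com",   # hypothetical primary host
    "https://example-backup-host.com",   # hypothetical fallback mirror
]

def is_up(url: str, timeout: int = 10) -> bool:
    """Return True if the site answers with a non-error HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except OSError:  # covers URLError, HTTPError and timeouts
        return False

if __name__ == "__main__":
    for site in SITES:
        status = "UP" if is_up(site) else "DOWN -- time for the backup plan"
        print(f"{site}: {status}")
```

Run from cron every few minutes, something this small is enough to tell you that your ‘never down’ host is, in fact, down.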

All of this reminded me of Unhosted.org which, no doubt, has received a boost because of the EC2 outage:

Unhosted is an open web standard for decentralizing user data. On the unhosted web, data is stored per-user, under the user’s control. That’s where it belongs.

I’m no fan of privatising everything within society but I do think that sometimes we rely on the state and big businesses a little too much to provide things we can organize through communities and networks. Despite the #fail that is the Big Society in the UK, the idea behind it remains sound. Unhosted is one way in which developers can come together as a force for good in the online sphere – just as the wildlife reserve and training Turkish educators were strategies in other spheres.

By way of conclusion, therefore, I’d like to challenge you. Be the change you want to see in the world. Small changes can carry large symbolic weight and lead to a domino effect. Take next week’s #purposedpsi event in Sheffield, for example. There’ll only be about 50-60 people but, due to networks (and networks of networks), we’ll have a disproportionate effect on people’s thinking and conversation. There are a few tickets left if you’d like to join us.

So, get out there and do something! Go and make the world a better place.

Revolutionary tools do not a revolution make.


A lot has been made of the role of social networking tools such as Facebook and Twitter in the recent uprisings in the Middle East and North Africa. Whilst I don’t know enough about Egypt, Libya and Bahrain to comment on their internal political situation, what I do know is that it takes more than the mere ‘potential’ of something to make a difference in practice.

And so it is with education. Mark Allen’s contribution to the #purposed debate reminded me of the important difference between something’s being available and an individual or group having the requisite skills and critical faculties to use it in a new, interesting, or even revolutionary way. As I mentioned in my comment on Mark’s blog, one of the reasons I think everyone should study a little Philosophy and History is because it prepares one to consider the ways things might, could or should be rather than being limited to tinkering within existing parameters.

So next time you read or hear of a technology or service that is going to, is, or has ‘revolutionised’ something, think of the context and milieu into which that tool or idea has been launched. As with Purpos/ed, it’s very likely you’ll find more than a hint of latent demand and the ‘adjacent possible’ in there. It’s never just about the tool or service.

Image CC BY Rev. Strangelove !!!!

Why everyone should learn a little History and Philosophy.

Inductive Empiricism

I’m all for breaking down the arbitrary and artificial barriers between ‘subjects’. I can remember having no idea what to specialise in at age 16 (and so hedging my bets with Maths and Physics on the one hand, and English Literature and History on the other). Despite this wish to see more osmosis between subject areas, I believe the knowledge, skills and understanding that come under the headings ‘History’ and ‘Philosophy’ to be especially important.

OK, so I’ve got degrees in both of them but their erosion, I believe, cuts us off from the past and alternative ways of thinking about the world around us. And that’s not a good thing.

I’ve just finished reading Tom Holland’s excellent, eloquent Millennium: the end of the world and the forging of Christendom and have just embarked upon Jared Diamond’s ambitious Collapse: how societies choose to fail or survive.* Diamond writes:

Past people were neither ignorant bad managers who deserved to be exterminated or dispossessed, nor all-knowing conscientious environmentalists who solved problems we can’t solve today. They were people like us, facing problems broadly similar to those we now face. They were prone either to succeed or to fail, depending on circumstances similar to those making us prone to succeed or fail today. Yes, there are differences between the situation we face today and that faced by past peoples, but there are enough similarities for us to be able to learn from the past.

It’s surprising, and encouraging, that many of those interested in educational technology have a background in the Humanities; the latter lends, I believe, a critical element that underpins a wider digital literacy.

I’ll be speaking several times this year on ‘The Essential Elements of Digital Literacy’. You can be sure that I’ll be stressing the importance of the criticality developed in the Humanities subjects over some of the shortsighted technological determinism that sometimes rears its ugly head online. I can say with some confidence that any time you wonder how Device X ‘will change education’ you’ve got it backwards.

So, long live History and Philosophy! (although not necessarily as discrete subject areas)**

Image CC BY-NC-SA mr lynch

*A good deal of my reading comes from serendipitous finds in secondhand bookshops. 🙂

**If you’re wondering, the choice of image for this post comes from it being one of the best tests I’ve found so far for the reading/understanding element of ‘digital literacy’. Why? Well, because you would have to understand:

  • The concept of a meme
  • That this is a derivation of a meme called lolcats
  • How to search to find out what it’s referring to
  • Which websites to visit for reliable information on this (which to trust)

Why we don’t celebrate Hallowe’en in our house

As I write this post we’ve got the lights off at the front of our house and, instead of being parked on the drive, our car is parked in a nearby street. Why? It’s Hallowe’en.

It’s not that we live in a rough neighbourhood and I’m scared of the kids. It’s that I:

  • can’t (as a historian/philosopher) see the point in it
  • don’t wish to celebrate evil, even implicitly
  • think that it’s 99% marketing-fuelled

Ten quick facts about Hallowe’en:

  1. It’s not a pagan festival.
  2. It was originally a couple of days of feasting without much religious or supernatural significance.
  3. Before the 8th century it was celebrated in May.
  4. It’s related to the enthusiastic ringing of bells by Catholics on All Souls Day to assist the passage of souls from purgatory.
  5. Hallowe’en traditions almost completely died out in England before the 20th century.
  6. Around this time, girls traditionally attempted to find out via various ‘signs’ – such as brushing their hair at midnight in front of a mirror – who they would marry.
  7. In 1950s England people either celebrated Guy Fawkes night or Hallowe’en, depending on geographic location.
  8. There was an ‘explosion of interest’ in Hallowe’en in the 1970s/80s and ‘trick or treating’ due to the influence of American TV series and films such as ET (1982) which depicted such scenes.
  9. Teachers have been accused of encouraging the spread of Hallowe’en celebrations to remove the focus on Guy Fawkes (‘Bonfire’) Night and associated safety concerns.
  10. Hallowe’en parties in England have been going since around the 1920s/30s and are now the busiest time of the year for fancy-dress hire shops.

The above were gleaned from a book I came across this weekend at Barter Books. I added photos of relevant pages to my Evernote account.

So, in conclusion, dressing up as something scary and begging is not something I’ll be encouraging my children to do when they’re old enough. Whilst I could open the door and lecture each group of children, the words ‘water’ and ‘off a duck’s back’ spring to mind. And, to be honest, I don’t want to be ‘that guy’.

The power of the media and invented tradition is, unfortunately, too great.

A brief history of infographics.

I recently picked up the classic Designing Infographics: Theory, creative techniques & practical solutions by Eric K. Meyer for an absolute song. Published in 1997, the book’s ‘practical solutions’ section is dated, but the theory and techniques section is as relevant as ever. What really interested me was the opening section on the history of infographics, some of which I’d like to share with you.

If hieroglyphics count as infographics, then of course they are around 5,000 years old. Sumerian ‘letters’ were combined with pictures to explain concepts, provide explanations and tell stories. A little more recently in the western world, graphics have been used to represent quantitative data. One of the first to do so was Nicole d’Oresme (c.1320-82), Bishop of Lisieux, who combined figures into groups and graphed them. Leonardo da Vinci was fond of mixing graphics and text, especially in his Treatise on Painting.

Modern infographics can be traced to William Playfair’s ‘information graphics’ for The Commercial & Political Atlas, published in 1786 and containing 44 graphics (mostly line, ‘fever’ or bar charts). Subsequently, Otto Neurath (1882-1945), a sociologist, developed the ‘Vienna method’. This stressed the importance of simple images to explain data. Neurath documented everything he researched statistically in graphic form, founding the ‘Isotype’ movement (International System of Typographic Picture Education) – an attempt at a world language without words. This, coupled with Modernism, had ‘a profound impact on graphics and design world-wide’. The London Underground map is a product of this movement.

The USA took longer to start using infographics, with the early adopters being Fortune magazine, the Chicago Tribune and the New York Times (the latter now being a leader in the field). Researchers Turnbull and Baird in 1962 realised the importance of infographics – in a world before the internet, 24-hour news and cable television:

Tests have proven that material of the same content has been received, read and acted upon in one form, but discarded in another. These examples, coupled with the knowledge that every reader is offered much more than he can ever assimilate, assert that graphic techniques are too important to be ignored.

By 1981 other newspapers were using infographics but it was the launch of USA Today in 1982 and its commitment to using graphics every day that started the real trend. Some of these, however (the types of bread – white, wheat or rye – preferred by members of Congress) were merely filler. In Germany, Der Spiegel had been experimenting with more artistic infographics since the mid-1950s.

The dawn of computers had a massive effect on infographics. ‘Desktop publishing’ became more than just a casual phrase when desktop computers, partnered with the first laser printers, led to reductions in newspaper department workloads by 15-20 hours per week. This freed up time to experiment with infographics. With programs available for the Apple Mac such as MacDraw, newspapers no longer required skilled artists laboriously hand-drawing each infographic.

As the processing power of computers grew, so did their ability to represent complex data in a visually-appealing way. In 1990, research carried out by the Gallup Organization showed that graphic elements possessed greater power than originally thought. Researchers used computerized headgear to record what readers saw on a page, noticing that visual elements received a great deal of attention. Follow-up studies confirmed this, and showed that readers were left with more memorable impressions than when presented with words alone.

The dawn of the internet has led to an explosion in interest and use of infographics. Many and diverse software packages and web applications are available to represent your data visually. If you’re interested, try the following three:

‘Information literacy’: its history and problems.

This is part of my Ed.D. literature review, part of my ongoing thesis which can be found at http://dougbelshaw.com/thesis. (You can view everything I’ve written on this blog for and about my thesis here.)

Image: ‘perlin flow particle ribbon’ CC BY-NC anthony mattox

Information literacy is a term that was coined in the 1970s but which has undergone a number of transformations to keep it current and relevant. Unlike ‘technological literacy,’ ‘computer literacy,’ and ‘ICT literacy’ it is not tied to a particular technology (and therefore not liable to become outdated), nor is it a corrective to an existing ‘literacy’ (as with ‘visual literacy’). Because it is not dependent upon any one technology or set of technologies, ‘information literacy’ has been eagerly taken on board by librarians (Martin, 2008:160) and governments (Fieldhouse & Nicholas, 2008:50) alike. Indeed, more recently it has been defined as a ‘habit of mind’ rather than a set of skills:

[I]nformation literacy is a way of thinking rather than a set of skills… It is a matrix of critical and reflective capacities, as well as disciplined creative thought, that impels the student to range widely through the information environment… When sustained through a supportive learning environment at course, program or institutional level, information literacy can become a dispositional habit… a “habit of mind” that seeks ongoing improvement and self-discipline in inquiry, research and integration of knowledge from varied sources. (Center for Intellectual Property in the Digital Environment, 2005:viii-ix)

Although evident in the literature since the 1970s, the concept of ‘information literacy’ gained real traction in the 1990s with the advent of mass usage of the internet. Suddenly information was a few effortless keystrokes and mouse clicks away rather than residing in great tomes in a physical place. Accessing this information and using it correctly constituted, for proponents of the concept, a new ‘literacy’. This was a time when politicians used the term ‘Information Superhighway’ to loosely describe the opportunities afforded by the internet.

‘Information literacy’ as a term was boosted greatly by a definition and six-stage model for developing the concept agreed upon by the American Library Association in 1989. The committee tasked with investigating information literacy proposed that an ‘information literate person’ would ‘recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information’ (quoted in Fieldhouse & Nicholas, 2008:52). Achieving the state of being ‘information literate’ involves passing through six stages, outlined in Bawden (2008:21-22):

  1. Recognizing a need for information
  2. Identifying what information is needed
  3. Finding the information
  4. Evaluating the information
  5. Organizing the information
  6. Using the information

Boekhorst (quoted in Virkus, 2003) believes that, indeed, all definitions of information literacy presented over the years can be summarized in three concepts. First there is the ICT concept: using ICT to ‘retrieve and disseminate information.’ Second is the information resources concept: the ability to find resources independently ‘without the aid of intermediaries.’ Finally comes the information process concept: ‘recognizing information need, retrieving, evaluating, using and disseminating of information to acquire or extend knowledge.’ As such, information literacy has at times been seen as including computer-related literacies, sometimes as part of such literacies, and sometimes as being tangential to them.

From these statements in the late 1980s/early 1990s information literacy developed to include an ethical dimension (‘knowing when and why you need information, where to find it, and how to evaluate, use and communicate it in an ethical manner’ – SCONUL (1999) quoted in Fieldhouse & Nicholas, 2008:52) and an economic dimension (‘Information literacy will be essential for all future employees’ – Langlois (1997) quoted in Martin, 2003:7). Information literacy has been seen as a ‘liberal art’ with an element of critical reflection (Shapiro & Hughes (1996) in Spitzer, et al., 1998:24), critical evaluation (Open University Library website, in Virkus, 2003), and as involving problem-solving and decision-making dimensions (Bruce, 1997).

The problem with such definitions and models is that they continue to view literacy as a state which can be achieved rather than an ongoing process and group of practices. However much ‘information literacy’ may be praised for being an inclusive term (Doyle, 1994), be evident in the policy documents produced by western governments (Fieldhouse & Nicholas, 2008:50) and be seen as ‘essential’ to the success of learners, it has ‘no agreed definition’ (Muir & Oppenheim in Virkus, 2003). It is, in the words of Stephen Foster, ‘a phrase in a quest for meaning’ (Snavely & Cooper, 1997:10). How, he wonders, would we recognize, and seek to remedy, ‘information illiteracy’?

However many theorists propose it as an ‘overarching literacy of life in the 21st century’ (Bruce, 2002) and bodies such as the US Association of College and Research Libraries come up with ‘performance indicators’ for the concept (Martin, 2008:159), ‘information literacy’ suffers from a lack of descriptive power. It is too ambitious in scope, too wide-ranging in application and not precise enough in detail to be useful in an actionable way. Even a move from talking about being ‘information literate’ to ‘information savvy’ (Fieldhouse & Nicholas, 2008:47) runs into difficulties for the same reasons. Definitions of the concept are too ‘objective’ and independent of the learner – even when described as ‘seven key characteristics’ (Bruce, cited in Bawden, 2008:22-23).

(References can be found at my wiki. Want more? You may have missed my post The history of ‘new literacies’)


The history of ‘new literacies’.

This section of my Ed.D. literature review is nearing completion, so I thought I’d share it! (although, of course, the whole thing is available via http://dougbelshaw.com/thesis)

No single, unitary referent for 'literacy'

The field of ‘new literacies’ has a relatively long history; it is a term that has evolved. Its beginnings can be traced back to the end of the 1960s and a feeling that standard definitions of ‘literacy’ missed something important: the increasingly visual nature of the media produced by society. In 1969 John Debes offered a tentative definition for a concept he called ‘visual literacy’:

Visual Literacy refers to a group of vision-competencies a human being can develop by seeing and at the same time having and integrating other sensory experiences. The development of these competencies is fundamental to normal human learning. When developed, they enable a visually literate person to discriminate and interpret the visible actions, objects, symbols, natural or man-made, that he encounters in his environment. Through the creative use of these competencies, he is able to communicate with others. Through the appreciative use of these competencies, he is able to comprehend and enjoy the masterworks of visual communication. (Debes, quoted in Avgerinou & Ericson, 1997:281)

Dondis, in A Primer of Visual Literacy (1973), made explicit the reasoning behind considering visual elements as requiring a separate ‘literacy’:

In print, language is the primary element, while visual factors, such as the physical setting or design format and illustration, are secondary or supportive. In the modern media, just the reverse is true. The visual dominates; the verbal augments. Print is not dead yet, nor will it ever be, but nevertheless, our language-dominated culture has moved perceptively toward the iconic. Most of what we know and learn, what we buy and believe, what we recognize and desire, is determined by the domination of the human psyche by the photograph. And it will be more so in the future. (quoted in Barry, 1997:1)

Those who espoused this doctrine were careful to stress the importance of being able both to decode and to encode, creating and communicating via images. Considine (1986) championed visual literacy as being ‘the ability to comprehend and create images in a variety of media in order to communicate effectively,’ leading to those who are ‘visually literate’ being ‘able to produce and interpret visual messages’ (quoted in Tyner, 1998:105). More recently, with the explosion of what I shall term ‘micro-literacies,’ the concept of ‘visual literacy’ has been re-conceived as ‘media grammar literacy’ (Frechette, quoted in Buckingham & Willett, 2006:168-9). That is to say, it stresses the medium as being at least as important as the message.

In essence, the notion of ‘visual literacy’ is an important corrective to the idea that it is only textual symbols that can encode and decode information and meaning. As Lowe (1993:24) puts it, ‘visual materials in general are typically not considered to pose any reading challenges to the viewer.’ This is considered in more depth by Paxson (2004:vi), Sigafoos & Green (2007:29), Bazeli & Heintz (1997:4) and Kovalchik & Dawson (2004:602). As Raney (quoted in Owen-Jackson, 2002:141) explains, coupling ‘visual’ with ‘literacy’ not only prompts a debate about the metaphorical use of language but, by using ‘literacy’ suggests ‘entitlement or necessity, and the need to seek out deficiencies and remedy them.’

Hijacking the term ‘literacy’ for such ends has, however, worried some who believe that it conflates ‘literacy’ with ‘competence’ (Adams & Hamm, in Potter, 2004:29). Whilst some in the early 1980s believed that ‘visual literacy’ may ‘still have some life left in it’ (Sless, in Avgerinou & Ericson, 1997:282), others considered the concept ‘phonologically, syntactically, and semantically untenable’ (Cassidy & Knowlton, in Avgerinou & Ericson, 1997:282), as ‘not a coherent area of study but, at best, an ingenious orchestration of ideas’ (Suhor & Little, in Avgerinou & Ericson, 1997:282). Each writer on the term has written from his or her viewpoint, leading to a situation akin to the apocryphal story of the six blind men tasked with describing an elephant, each doing so differently when given a different part to feel (Burbank & Pett, quoted in Avgerinou & Ericson, 1997:283). The feeling from the literature seems to be that whilst there may be something important captured in part by the term ‘visual literacy’, it all too easily collapses into solipsism and therefore loses descriptive and explanatory power.

The concept of ‘visual literacy’ continued until the late 1990s, eventually being enveloped by ‘umbrella terms’ combining two or more ‘literacies.’ Parallel to visual literacy from the 1970s onwards came the development of the term ‘technological literacy.’ It began to gain currency as a growing awareness took hold of the potential dangers to the environment of technological development, as well as economic fears in the western world about the competition posed by technologically more adept nations (Martin, 2008:158). ‘Technological literacy’ (or ‘technology literacy’) was a marriage of skills-based concerns with a more ‘academic’ approach, leading to a US government-funded publication entitled Technology for All Americans. This defined ‘technological literacy’ as combining ‘the ability to use… the key systems of the time,’ ‘insuring that all technological activities are efficient and appropriate,’ and ‘synthesiz[ing]… information into new insights’ (quoted in Martin, 2008:158). This literacy was one defined and prompted by economic necessities and political concerns.

Although stimulated by competition with non-western countries, a growing awareness in the 1980s that computers and related technologies were producing a ‘postmodern consciousness of multiple perspectives’ with young people ‘culturally positioned by the pervasiveness of computer-based and media technologies’ (Smith, et al., 1988, quoted in Johnson-Eilola, 1998:211-2) reinforced the need for the formalization of some type of literacy relating to the use of computers and other digital devices. Technological literacy seemed to be an answer. Gurak (2001:13) dubbed this a ‘performative’ notion of literacy: ‘the ability to do something is what counts.’ Literacy was reduced to being ‘technology literate’, meaning ‘knowing how to use a particular piece of technology.’ The ‘critical’ element of literacy, which Gurak is at pains to stress, including the ability to make meta-level judgements about technology usage, was entirely absent from these 1970s and 80s definitions. Technological or technology literacy is too broad a concept as ‘nearly all modes of communication are technologies – so there is no functional distinction between print-based literacy and digital literacy.’ (Eyman, no date:7) Discussions about, and advocates of, ‘technological literacy’ had mostly petered out by the late 1980s/early 1990s.

Growing out of the perceived need for a ‘technological literacy’ came, with the dawn of the personal computer, calls for definitions of a ‘computer literacy.’ Before the Apple II, ‘microcomputers’ were sold in kit form for hobbyists to assemble themselves. With the Apple II in 1977, followed by IBM’s first ‘Personal Computer’ (PC) in 1981, computers became available to the masses. Graphical User Interfaces (GUIs) were developed from the early 1980s onwards, with the first iteration of Apple’s ‘Finder’ coming in 1984 followed by Microsoft’s ‘Windows’ in 1985. There is a symbiotic link between the hardware and software available at any given time and the supposed skills, competencies and ‘literacies’ that accompany their usage. As computers and their interfaces developed so did conceptions of the ‘literacy’ that accompany their usage.

The term ‘computer literacy’ was an attempt to give a vocational aspect to the use of computers and to state how useful computers could be in almost every area of learning (Buckingham, 2008:76). Definitions of computer literacy from the 1980s include ‘the skills and knowledge needed by a citizen to survive and thrive in a society that is dependent on technology’ (Hunter, 1984 quoted in Oliver & Towers, 2000), ‘appropriate familiarity with technology to enable a person to live and cope in the modern world’ (Scher, 1984 quoted in Oliver & Towers, 2000), and ‘an understanding of computer characteristics, capabilities and applications, as well as an ability to implement this knowledge in the skilful and productive use of computer applications’ (Simonson, et al., 1987 quoted in Oliver & Towers, 2000). As Andrew Molnar, who allegedly coined the term, points out ‘computer literacy,’ like ‘technological literacy’ is an extremely broad church, meaning that almost anything could count as an instance of the term:

We started computer literacy in ’72 […] We coined that phrase. It’s sort of ironic. Nobody knows what computer literacy is. Nobody can define it. And the reason we selected [it] was because nobody could define it, and […] it was a broad enough term that you could get all of these programs together under one roof. (‘Interview with Andrew Molnar,’ OH 234. Center for the History of Information Processing, Charles Babbage Institute, University of Minnesota, quoted at http://encyclopedia2.thefreedictionary.com/Digital+literacy)

Later in the decade an attempt was made to equate computer literacy with programming ability:

It is reasonable to suggest that a person who has written a computer program should be called literate in computing. This is an extremely elementary definition. Literacy is not fluency. (Nevison, 1976, quoted in Martin, 2003:12)

In the 1980s applications available from the command line removed the need for users to be able to program the application in the first place. Views on what constituted ‘computer literacy’ changed as a result. The skills and attributes of a user who is said to be ‘computer literate’ became no more tangible, however, and simply focused on the ability to use computer applications rather than the ability to program (Van Leeuwen, et al., in Cunningham, 2006:1580). On reflection, it is tempting to describe the abilities that fell within the sphere of ‘computer literacy’ as competencies – as a collection of skills that can be measured using, for example, the European Computer Driving Licence (ECDL). By including the word ‘literacy,’ however, those unsure about the ‘brave new world’ of computers could be reassured that the digital frontier is not that different after all from the physical world with which they are familiar (Bigum, in Snyder (ed.) 2002:133). Literacy once again was used to try to convey and shape meaning from a rather nebulous and loosely-defined set of skills.

Martin (2003, quoted in Martin 2008:156-7) has identified conceptions of ‘computer literacy’ as passing through three phases. First came the Mastery phase which lasted up until the mid-1980s. In this phase the computer was perceived as ‘arcane and powerful’ and the emphasis was on programming and gaining control over it. This was followed by the Application phase from the mid-1980s up to the late 1990s. The coming of simple graphical interfaces such as Windows 3.1 allowed computers to be used by the masses. Computers began to be used as tools for education, work and leisure. This is the time when many certification schemes based on ‘IT competence’ began – including the ECDL. From the late 1990s onwards came the Reflective phase with the ‘awareness of the need for more critical, evaluative and reflective approaches.’ (Martin 2008:156-7) It is during this latter phase that the explosion of ‘new literacies’ occurred.

The main problem with computer literacy was the elision between ‘literacy’ as meaning (culturally-valued) knowledge and ‘literacy’ as being bound up with the skills of reading and writing (Wiley, 1996 quoted in Holme, 2004:1-2). Procedural knowledge about how to use a computer was conflated with the ability to use a computer in creative and communicative activities. The assumption that using a computer to achieve specified ends constituted a literacy began to be questioned towards the end of the 1990s. A US National Research Council report from 1999 questioned whether today’s ‘computer literacy’ would be enough in a world of rapid change:

Generally, ‘computer literacy’ has acquired a ‘skills’ connotation, implying competency with a few of today’s computer applications, such as word processing and e-mail. Literacy is too modest a goal in the presence of rapid change, because it lacks the necessary ‘staying power’. As the technology changes by leaps and bounds, existing skills become antiquated and there is no migration path to new skills. A better solution is for the individual to plan to adapt to changes in the technology. (quoted in Martin, 2003:16)

Literacy is seen as a fixed entity under this conception, as a state rather than a process.

It became apparent that ‘definitions of computer literacy are often mutually contradictory’ (Talja, 2005 in Johnson, 2008:33), that ‘computer literacy’ might not ‘convey enough intellectual power to be likened to textual literacy’ (diSessa, 2000:109), and authors as early as 1993 were talking of ‘the largely discredited term “computer literacy”’ (Bigum & Green, 1993:6). Theorists scrambled to define new and different terms. An explosion and proliferation of terms ranging from the obvious (‘digital literacy’) to the awkward (‘electracy’) occurred. At times, this seems to have been as much to do with authors making their names known as with providing a serious and lasting contribution to the literacy debate.

As the term ‘computer literacy’ began to lose credibility and the use of computers for communication became more mainstream, the term ‘ICT literacy’ (standing for ‘Information and Communications Technology’) became more commonplace. Whereas with ‘computer literacy’ and the dawn of GUIs the ‘encoding’ element of literacy had been lost, this began to be restored with ‘ICT literacy.’ The following definition from the US-based Educational Testing Service’s ICT Literacy Panel is typical:

ICT literacy is using digital technology, communications tools, and/or networks to access, manage, integrate, evaluate, and create information in order to function in a knowledge society. (ETS ICT Literacy Panel, 2002:2)

The skills outlined in this definition are more than merely procedural; they are conceptual. This raises the question of whether ICT literacy is an absolute term, ‘a measure of a person’s total functional skills in ICT’, or ‘a relative measure’ – there being ICT literacies, with individuals on separate scales (Oliver & Towers, 2000). Those who believe it to be an absolute term have suggested a three-stage process to become ICT literate. First comes the simple use of ICT (spreadsheets, word processing, etc.), followed by engagement with online communities, sending emails and browsing the internet. Finally comes engagement in elearning ‘using whatever systems are available’ (Cook & Smith, 2004). This definition of literacy is rather ‘tools-based’ and is analogous to defining classical literacy in terms of papyrus rolls, fountain pens, or even sitting in a library. A particular literacy is seen as being reliant upon particular tools rather than involving a meta-level definition.

The problem is that, as with its predecessor term, ‘ICT literacy’ means different things to different groups of people. The European Commission, for example, conceives of ICT literacy as ‘learning to operate… technology’ without it including any ‘higher-order skills such as knowing and understanding what it means to live in a digitalized and networked society’ (Coutinho, 2007). This is in direct opposition to the ETS definition above – demonstrating the fragmented and ambiguous nature of the term. Town (2003:53) sees ‘ICT literacy’ in the United Kingdom as

a particularly unfortunate elision, as it appears to imply inclusion of information literacy, but in fact is only a synonym for IT (or computer) literacy. Its use tends to obscure the fact that information literacy is a well developed concept separate from IT (information technology) literacy.

As Town goes on to note, this is not the case in non English-speaking countries.

(Please see http://dougbelshaw.com/thesis for references/bibliography. To avoid making a long post even longer, I shall post separately my section on ‘information literacy’) 🙂

Raising achievement in History at KS4 using e-learning

SHP 2009 slides

Click here to go straight to the slides

I’m at the annual Schools History Project Conference for the fifth time this weekend and am presenting for the third time. This is the first time that I’ll be presenting without my partner in crime, Nick Dennis, as he’s unable to make the conference. It’s a shame, but it means I can focus entirely on what I did with my Year 10 History class this academic year at my previous school.

I’ve used the Cooliris presentation method, pioneered by Alan Levine, which I piloted in my Open Source School presentation earlier this month. I’m not so sure he uses a Nintendo Wiimote (along with DarwiinRemote) with Cooliris, though. It’s an excellent presentation method – and free if you create your slides in OpenOffice.org (as I do!) 😀

The easiest way to share the link directly to the slides that go with this presentation is to go to:

http://bit.ly/SHP2009

Links (in order mentioned) to the websites in the presentation can be found below:


My presentation @ TeachMeet Midlands 2009

TeachMeet Midlands 2009

This evening I’ll be attending TeachMeet Midlands 2009 at the National College for School Leadership in Nottingham. If you’ve never heard of a TeachMeet before, they’re based around the idea of an unconference: a ‘facilitated, participant-driven conference centered around a theme or purpose’ (Wikipedia). I’ve been to a couple before – both of which were additions to the BETT Show – and they’re great events. There’s a fantastic buzz around the place, people are passionate about what they do, and it’s a wonderful way not only to meet up with people you’ve only talked to online, but to come across new faces as well! 🙂

My (micro)presentation

I’ve signed up on the TeachMeet wiki to do a 7-minute micropresentation. Initially, I was going to talk about my role this year as E-Learning Staff Tutor and a bit about my Ed.D. on digital literacy. However, TeachMeets should be a lot more focused on classroom practice, so I’ve decided to instead talk about what I’ve been doing with my Year 10 History class.

This year I saw having a new, fairly able GCSE History class as a good opportunity to try out some new methods and approaches to the course. As students at my school now have four lessons of their option subject per week instead of three, I decided to have one of them timetabled in an ICT suite. The room I was allocated has tiered seating and laptops, which was even better! :-p

After looking at various options, I decided to use Posterous for their homework blogs. Reasons for this include:

  • Blog posts can be written by email (see the sketch below).
  • It deals with media in an ‘intelligent’ way (e.g. using Scribd to embed documents, making slideshows out of images)
  • Avatars allow for personalization.

I set almost no homework apart from on their blogs. This means that on a Friday they start an activity using (usually) a Web 2.0 service and then add it to their blog via embedding or linking. The only problem with this has been Posterous not supporting iframes, meaning that Google Docs, for example, have to be exported to PDF and then uploaded. Students are used to this now and it doesn’t really affect their workflow.
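Since posting by email is what makes this workflow so low-friction, here’s a rough sketch of how a blog post could be submitted by email from a script. This is a generic illustration: the SMTP server, credentials and posting address are hypothetical placeholders, not Posterous’s actual documented endpoints.

```python
# A sketch of submitting a blog post by email.
# The server, credentials and posting address are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"          # hypothetical outgoing mail server
POST_ADDRESS = "post@example-blog.com"  # hypothetical email-to-blog address

def post_by_email(subject: str, body: str, sender: str, password: str) -> None:
    """Email a post: the subject line typically becomes the post title."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = POST_ADDRESS
    msg["Subject"] = subject
    msg.set_content(body)  # plain-text post body

    with smtplib.SMTP(SMTP_HOST, 587) as server:
        server.starttls()  # encrypt the connection before logging in
        server.login(sender, password)
        server.send_message(msg)

# Example usage (made-up values):
# post_by_email("Year 10: causes of the First World War",
#               "This week we looked at...",
#               "teacher@example.com", "app-password")
```

The same pattern works for any email-to-post service; media added via add_attachment() would then be handled according to whatever embedding rules the platform applies.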

Examples of student work

Links to all blogs can be found at http://mrbelshaw.posterous.com

Student feedback

I should, perhaps, have asked for parental permission to video students’ opinions about this approach. From what they tell me, they greatly enjoy working on their blogs. In fact, a Geography teacher at school has hijacked one of my students’ blogs so she does work for both History and Geography on it! I think they appreciate the following things:

  • Presentation (a lot easier, especially for boys, to produce good-looking work)
  • Multimedia (they’re not looking at paper-based stuff all the time)
  • Collaboration (they get to work with others whilst still having ‘ownership’ of the final product on their blogs)

It’s a system that I’d definitely recommend and I shall be using in future! 😀

Short URL for this post (for Twitter, etc.) = http://bit.ly/4jD6V
