
#eduhivefive (a suggestion).

This follows on from a previous post re: the problem with (non-OSS) free stuff.

Image: ‘Bees’

I’ll keep this short.

Lifehacker has a great regular thing called Hive Five for software/productivity recommendations. It goes like this:

  1. Question asked: ‘What’s the best x for y?’
  2. People respond.
  3. Five most mentioned in a positive way become ‘recommended’.

We should totally do this for education. I’ve created a wiki at http://eduhivefive.wikispaces.com in anticipation. :-p

Perhaps, given the demise of Etherpad, we could kick off with: “What’s the best online tool for collaborative writing?” and use #eduhivefive and #writing as hashtags?

The history of ‘new literacies’.

This section of my Ed.D. literature review is nearing completion, so I thought I’d share it! (although, of course, the whole thing is available via http://dougbelshaw.com/thesis)

No single, unitary referent for 'literacy'

The field of ‘new literacies’ has a relatively long history; it is a term that has evolved. Its beginnings can be traced back to the end of the 1960s, when a feeling arose that standard definitions of ‘literacy’ missed something important about the increasingly visual nature of the media being produced by society. In 1969 John Debes offered a tentative definition for a concept he called ‘visual literacy’:

Visual Literacy refers to a group of vision-competencies a human being can develop by seeing and at the same time having and integrating other sensory experiences. The development of these competencies is fundamental to normal human learning. When developed, they enable a visually literate person to discriminate and interpret the visible actions, objects, symbols, natural or man-made, that he encounters in his environment. Through the creative use of these competencies, he is able to communicate with others. Through the appreciative use of these competencies, he is able to comprehend and enjoy the masterworks of visual communication. (Debes, quoted in Avgerinou & Ericson, 1997:281)

Dondis, in A Primer of Visual Literacy (1973), made explicit the reasoning behind considering visual elements as requiring a separate ‘literacy’:

In print, language is the primary element, while visual factors, such as the physical setting or design format and illustration, are secondary or supportive. In the modern media, just the reverse is true. The visual dominates; the verbal augments. Print is not dead yet, nor will it ever be, but nevertheless, our language-dominated culture has moved perceptively toward the iconic. Most of what we know and learn, what we buy and believe, what we recognize and desire, is determined by the domination of the human psyche by the photograph. And it will be more so in the future. (quoted in Barry, 1997:1)

Those who espoused this doctrine were careful to stress the importance of being able both to decode and encode, creating and communicating via images. Considine (1986) championed visual literacy as being ‘the ability to comprehend and create images in a variety of media in order to communicate effectively,’ leading to those who are ‘visually literate’ being ‘able to produce and interpret visual messages’ (quoted in Tyner, 1998:105). More recently, with the explosion of what I shall term ‘micro-literacies,’ the concept of ‘visual literacy’ has been re-conceived as ‘media grammar literacy’ (Frechette, quoted in Buckingham & Willett, 2006:168-9). That is to say, it stresses the medium as being at least as important as the message.

In essence, the notion of ‘visual literacy’ is an important corrective to the idea that it is only textual symbols that can encode and decode information and meaning. As Lowe (1993:24) puts it, ‘visual materials in general are typically not considered to pose any reading challenges to the viewer.’ This is considered in more depth by Paxson (2004:vi), Sigafoos & Green (2007:29), Bazeli & Heintz (1997:4) and Kovalchik & Dawson (2004:602). As Raney (quoted in Owen-Jackson, 2002:141) explains, coupling ‘visual’ with ‘literacy’ not only prompts a debate about the metaphorical use of language but, by using ‘literacy’, suggests ‘entitlement or necessity, and the need to seek out deficiencies and remedy them.’

Hijacking the term ‘literacy’ for such ends has, however, worried some who believe that it conflates ‘literacy’ with ‘competence’ (Adams & Hamm, in Potter, 2004:29). Whilst some in the early 1980s believed that ‘visual literacy’ may ‘still have some life left in it’ (Sless, in Avgerinou & Ericson, 1997:282), others considered the concept ‘phonologically, syntactically, and semantically untenable’ (Cassidy & Knowlton, in Avgerinou & Ericson, 1997:282), as ‘not a coherent area of study but, at best, an ingenious orchestration of ideas’ (Suhor & Little, in Avgerinou & Ericson, 1997:282). Each writer on the term has written from his or her viewpoint, leading to a situation akin to the apocryphal story of the six blind men tasked with describing an elephant, each doing so differently when given a different part to feel (Burbank & Pett, quoted in Avgerinou & Ericson, 1997:283). The feeling from the literature seems to be that whilst there may be something important captured in part by the term ‘visual literacy’, it all too easily collapses into solipsism and therefore loses descriptive and explanatory power.

The concept of ‘visual literacy’ continued until the late 1990s, eventually being enveloped by ‘umbrella terms’ combining two or more ‘literacies.’ Parallel to visual literacy, from the 1970s onwards came the development of the term ‘technological literacy.’ It began to gain currency as awareness grew of the potential environmental dangers of technological development, alongside economic fears in the western world about the competition posed by technologically more adept nations (Martin, 2008:158). ‘Technological literacy’ (or ‘technology literacy’) was a marriage of skills-based concerns with a more ‘academic’ approach, leading to a US government-funded publication entitled Technology for All Americans. This defined ‘technological literacy’ as combining ‘the ability to use… the key systems of the time,’ ‘insuring that all technological activities are efficient and appropriate,’ and ‘synthesiz[ing]… information into new insights’ (quoted in Martin, 2008:158). This literacy was one defined and prompted by economic necessities and political concerns.

Although stimulated by competition with non-western countries, a growing awareness in the 1980s that computers and related technologies were producing a ‘postmodern consciousness of multiple perspectives,’ with young people ‘culturally positioned by the pervasiveness of computer-based and media technologies’ (Smith, et al., 1988, quoted in Johnson-Eilola, 1998:211-2), reinforced the need for the formalization of some type of literacy relating to the use of computers and other digital devices. Technological literacy seemed to be an answer. Gurak (2001:13) dubbed this a ‘performative’ notion of literacy: ‘the ability to do something is what counts.’ Literacy was reduced to being ‘technology literate’, meaning ‘knowing how to use a particular piece of technology.’ The ‘critical’ element of literacy, which Gurak is at pains to stress and which includes the ability to make meta-level judgements about technology usage, was entirely absent from these 1970s and 80s definitions. Technological or technology literacy is also too broad a concept, as ‘nearly all modes of communication are technologies – so there is no functional distinction between print-based literacy and digital literacy’ (Eyman, no date:7). Discussions about, and advocates of, ‘technological literacy’ had mostly petered out by the late 1980s/early 1990s.

Growing out of the perceived need for a ‘technological literacy’ came, with the dawn of the personal computer, calls for definitions of a ‘computer literacy.’ Before the Apple II, ‘microcomputers’ were sold in kit form for hobbyists to assemble themselves. With the Apple II in 1977, followed by IBM’s first ‘Personal Computer’ (PC) in 1981, computers became available to the masses. Graphical User Interfaces (GUIs) were developed from the early 1980s onwards, with the first iteration of Apple’s ‘Finder’ coming in 1984, followed by Microsoft’s ‘Windows’ in 1985. There is a symbiotic link between the hardware and software available at any given time and the supposed skills, competencies and ‘literacies’ that accompany their usage. As computers and their interfaces developed, so did conceptions of the ‘literacy’ that accompanied their usage.

The term ‘computer literacy’ was an attempt to give a vocational aspect to the use of computers and to state how useful computers could be in almost every area of learning (Buckingham, 2008:76). Definitions of computer literacy from the 1980s include ‘the skills and knowledge needed by a citizen to survive and thrive in a society that is dependent on technology’ (Hunter, 1984, quoted in Oliver & Towers, 2000), ‘appropriate familiarity with technology to enable a person to live and cope in the modern world’ (Scher, 1984, quoted in Oliver & Towers, 2000), and ‘an understanding of computer characteristics, capabilities and applications, as well as an ability to implement this knowledge in the skilful and productive use of computer applications’ (Simonson, et al., 1987, quoted in Oliver & Towers, 2000). As Andrew Molnar, who allegedly coined the term, points out, ‘computer literacy,’ like ‘technological literacy’, is an extremely broad church, meaning that almost anything could count as an instance of the term:

We started computer literacy in ’72 […] We coined that phrase. It’s sort of ironic. Nobody knows what computer literacy is. Nobody can define it. And the reason we selected [it] was because nobody could define it, and […] it was a broad enough term that you could get all of these programs together under one roof (“Interview with Andrew Molnar,” OH 234. Center for the History of Information Processing, Charles Babbage Institute, University of Minnesota, quoted at http://encyclopedia2.thefreedictionary.com/Digital+literacy).

Later in the decade an attempt was made to equate computer literacy with programming ability:

It is reasonable to suggest that a person who has written a computer program should be called literate in computing. This is an extremely elementary definition. Literacy is not fluency. (Nevison, 1976, quoted in Martin, 2003:12)

In the 1980s, ready-made applications available from the command line removed the need for users to be able to program applications themselves. Views on what constituted ‘computer literacy’ changed as a result. The skills and attributes of a user said to be ‘computer literate’ became no more tangible, however; they simply focused on the ability to use computer applications rather than the ability to program (Van Leeuwen, et al., in Cunningham, 2006:1580). On reflection, it is tempting to describe the abilities that fell within the sphere of ‘computer literacy’ as competencies – as a collection of skills that can be measured using, for example, the European Computer Driving Licence (ECDL). By including the word ‘literacy,’ however, those unsure about the ‘brave new world’ of computers could be reassured that the digital frontier was not so different after all from the physical world with which they were familiar (Bigum, in Snyder (ed.) 2002:133). Literacy was once again used to try to convey and shape meaning from a rather nebulous and loosely-defined set of skills.

Martin (2003, quoted in Martin 2008:156-7) has identified conceptions of ‘computer literacy’ as passing through three phases. First came the Mastery phase, which lasted until the mid-1980s. In this phase the computer was perceived as ‘arcane and powerful’ and the emphasis was on programming and gaining control over it. This was followed by the Application phase, from the mid-1980s up to the late 1990s. The coming of simple graphical interfaces such as Windows 3.1 allowed computers to be used by the masses. Computers began to be used as tools for education, work and leisure. This is the time when many certification schemes based on ‘IT competence’ began – including the ECDL. From the late 1990s onwards came the Reflective phase, with the ‘awareness of the need for more critical, evaluative and reflective approaches’ (Martin 2008:156-7). It is during this latter phase that the explosion of ‘new literacies’ occurred.

The main problem with computer literacy was the elision between ‘literacy’ as meaning (culturally-valued) knowledge and ‘literacy’ as being bound up with the skills of reading and writing (Wiley, 1996, quoted in Holme, 2004:1-2). Procedural knowledge about how to use a computer was conflated with the ability to use a computer in creative and communicative activities. The assumption that using a computer to achieve specified ends constituted a literacy began to be questioned towards the end of the 1990s. A US National Research Council report from 1999 questioned whether today’s ‘computer literacy’ would be enough in a world of rapid change:

Generally, ‘computer literacy’ has acquired a ‘skills’ connotation, implying competency with a few of today’s computer applications, such as word processing and e-mail. Literacy is too modest a goal in the presence of rapid change, because it lacks the necessary ‘staying power’. As the technology changes by leaps and bounds, existing skills become antiquated and there is no migration path to new skills. A better solution is for the individual to plan to adapt to changes in the technology. (quoted in Martin, 2003:16)

Literacy is seen as a fixed entity under this conception: a state rather than a process.

It became apparent that ‘definitions of computer literacy are often mutually contradictory’ (Talja, 2005, in Johnson, 2008:33), and that ‘computer literacy’ might not ‘convey enough intellectual power to be likened to textual literacy’ (diSessa, 2000:109), with authors as early as 1993 talking of ‘the largely discredited term “computer literacy”’ (Bigum & Green, 1993:6). Theorists scrambled to define new and different terms. An explosion and proliferation of terms occurred, ranging from the obvious (‘digital literacy’) to the awkward (‘electracy’). At times, this seems to have been as much to do with authors making their name known as with providing a serious and lasting contribution to the literacy debate.

As the term ‘computer literacy’ began to lose credibility and the use of computers for communication became more mainstream, the term ‘ICT literacy’ (standing for ‘Information and Communications Technology’) became more commonplace. Whereas with ‘computer literacy’ and the dawn of GUIs the ‘encoding’ element of literacy had been lost, this began to be restored with ‘ICT literacy.’ The following definition from the US-based Educational Testing Service’s ICT Literacy Panel is typical:

ICT literacy is using digital technology, communications tools, and/or networks to access, manage, integrate, evaluate, and create information in order to function in a knowledge society. (ETS ICT Literacy Panel, 2002:2)

The skills outlined in this definition are more than merely procedural; they are conceptual. This raises the question of whether ICT literacy is an absolute term, ‘a measure of a person’s total functional skills in ICT’, or ‘a relative measure’ – there being ICT literacies, with individuals on separate scales (Oliver & Towers, 2000). Those who believe it to be an absolute term have suggested a three-stage process to become ICT literate. First comes the simple use of ICT (spreadsheets, word processing, etc.), followed by engagement with online communities, sending emails and browsing the internet. Finally comes engagement in elearning ‘using whatever systems are available’ (Cook & Smith, 2004). This definition of literacy is rather ‘tools-based’ and is analogous to specifying papyrus rolls, fountain pens or even sitting in a library under the classical definition of literacy. A particular literacy is seen as being reliant upon particular tools rather than involving a meta-level definition.

The problem is that, as with its predecessor term, ‘ICT literacy’ means different things to different groups of people. The European Commission, for example, conceives of ICT literacy as ‘learning to operate… technology’ without it including any ‘higher-order skills such as knowing and understanding what it means to live in a digitalized and networked society’ (Coutinho, 2007). This is in direct opposition to the ETS definition above – demonstrating the fragmented and ambiguous nature of the term. Town (2003:53) sees ‘ICT literacy’ in the United Kingdom as

‘a particularly unfortunate elision’ as it ‘appears to imply inclusion of information literacy, but in fact is only a synonym for IT (or computer) literacy. Its use tends to obscure the fact that information literacy is a well developed concept separate from IT (information technology) literacy.’

As Town goes on to note, this is not the case in non-English-speaking countries.

(Please see http://dougbelshaw.com/thesis for references/bibliography. To avoid making a long post even longer, I shall post separately my section on ‘information literacy’) 🙂

Best of Belshaw (2009)

Last year I simply listed the ‘top’ 25 posts on this blog from the previous year in Top 25: the Best of Belshaw 2008. This year, I’ve gone one step further: I’ve created a book!

It’s available as a free download as an e-book or to purchase (at cost price) as a physical book from Lulu.com:

Best of Belshaw (2009)

And yes, it’s uncopyrighted as well. 🙂

Free copies

I’ve ordered 10 copies and am going to be giving them away for free to the following (UK-based) people who have helped and inspired me this year (in alphabetical order):

  1. Dai Barnes (for his help with EdTechRoundUp)
  2. Lisa Stevens (for being a cheerful, caring sort of person)
  3. Nick Dennis (for being my partner-in-crime on various projects)
  4. Stuart Ridout (for his help with the upcoming #movemeon book)
  5. Tom Barrett (for being a truly inspirational educator and collaborator)

Over and above these I’ll be giving some to members of my family, so I’ll have two spare to give away. If you’d like one of these, please leave a comment below explaining why! Update: thanks to those who requested a copy in the comments below – the two that were up for grabs are going to Daniel Dainty & Julian Wood! :-p

Beyond Creative Commons: uncopyright.

CC badges

Background

Jonathan Lethem (via Harold Jarche):

Copyright is a “right” in no absolute sense; it is a government-granted monopoly on the use of creative results. So let’s try calling it that—not a right but a monopoly on use, a “usemonopoly”—and then consider how the rapacious expansion of monopoly rights has always been counter to the public interest…

Seth Godin:

So, how to protect your ideas in a world where ideas spread?

Don’t.

Instead, spread them. Build a reputation as someone who creates great ideas, sometimes on demand. Or as someone who can manipulate or build on your ideas better than a copycat can. Or use your ideas to earn a permission asset so you can build a relationship with people who are interested. Focus on being the best tailor with the sharpest scissors, not the litigant who sues any tailor who deigns to use a pair of scissors.

Leo Babauta:

This blog is Uncopyrighted. Its author, Leo Babauta, has released all claims on copyright and has put all the content of this blog into the public domain.

No permission is needed to copy, distribute, or modify the content of this site. Credit is appreciated but not required.

Terms and Conditions for Copying, Distribution and Modification

0. Do whatever you like.

Motivation

Be the change you want to see in the world (Gandhi)

Response

I’m here to change things. Do what you like with my stuff. It would be nice if you referenced where you get your ideas/resources from, but it’s no longer necessary. From now on, my stuff is uncopyrighted.

CC BY laihiu

The problem with free stuff.

Image: ‘divieto?’

Background:

I like free stuff. I also like Open Source (OSS) stuff. I especially like FLOSS. OSS has a model that works:

In his 1997 essay The Cathedral and the Bazaar, open source evangelist Eric S. Raymond suggests a model for developing OSS known as the bazaar model. Raymond likens the development of software by traditional methodologies to building a cathedral, “carefully crafted by individual wizards or small bands of mages working in splendid isolation”. He suggests that all software should be developed using the bazaar style, which he described as “a great babbling bazaar of differing agendas and approaches.” (Wikipedia)

The trouble is, the only real ‘model’ that non-OSS developers have for making software freely available is freemium: making basic services free whilst charging for more advanced features.

The problem:

Educators get upset when services they’ve been using (for free) get shut down. That’s understandable.

Why are educators using these free, online tools? Because those that are provided for them don’t cut the mustard. Why aren’t they paying for the more advanced (premium) features? Because they would have to pay for them personally.

Solutions:

  1. Encourage/dictate that staff and students use only Open Source software (if a developer leaves, the software is still there and you can find/pay someone to develop it further)
  2. Give staff (and students?) a budget to spend on software/web apps (a bit like a personal version of the ill-fated eLearning Credits system in the UK)
  3. Have a backup plan (what other services could you migrate to if the worst came to the worst?)

Conclusion:

If you don’t pay for it (or, if ad-supported, click on the ads) don’t grumble if it’s not there tomorrow.

E-safety: the ‘googleability test’ (a suggestion).

The problem:

@4goggas (Kerry Turner)

Kerry Turner:

Any educator launching into the world of social media has to know its risks.

One evening, after reading several posts on Twitter, it was mentioned that school Acceptable User Policies were declaring that all contact with students on social media was to be avoided.

There are strong cases for and against its use. Most important is where the very public nature of social media spotlights professional conduct, where it is used as a vehicle for bullying, or presents us with evidence which we might need to flag up or report to a higher authority. Teachers worry that their natural way of conversing; expressing themselves after a frustrating day, or humorous posts about their personal life could compromise their position at work and result in a telling off from a superior. Yet we teach children to mind themselves online. Within reason, do we not need to consider the same? My belief is that as more students and NQT’s are educated about their use of social media, so the number of incidents which have resulted in censure will become less.

(my emphasis)

A solution?

IF “teacher” AND “http://www.google.com/search?&q=teacher” = “unprofessional” THEN “censure”

Goodness knows I’ve tried my best to put together some reasonable Acceptable Use Policies and ‘Digital Guidelines’ in the past. I think that we have to come to terms with the fact that people live increasingly large amounts of their lives connected via social media. So if you’re a teacher, use Twitter and occasionally swear, then protect your updates. If you don’t, and mind what you say, then as you were.

Using Google (or any search engine, for that matter) to search for an educator should bring up positive results on the first page. If it doesn’t, you’re doing something wrong.
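
By way of illustration only, here's a rough sketch (in Python) of what an automated version of this 'googleability test' might look like. It assumes you already have the text of the first-page search results for a name, however you came by them, and the list of 'negative' keywords is a made-up placeholder rather than any real standard:

import re

# Hypothetical keywords that might mark a result as 'unprofessional'.
# These are illustrative placeholders, not an agreed-upon list.
NEGATIVE_KEYWORDS = {"unprofessional", "misconduct", "complaint", "disciplinary"}

def googleability_test(first_page_snippets):
    """Return True if none of the first-page result snippets
    contain an obviously negative keyword."""
    for snippet in first_page_snippets:
        words = set(re.findall(r"[a-z]+", snippet.lower()))
        if words & NEGATIVE_KEYWORDS:
            return False
    return True

# Example with made-up snippets:
snippets = [
    "Mr Smith wins regional teaching award",
    "Class blog: our trip to the science museum",
]
print(googleability_test(snippets))  # True

Passing a crude keyword check like this is, of course, nothing more than the first-page sanity check described above.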

After all, anyone can find out something negative or ‘unprofessional’ about a person if they do enough digging. :-p

#twitter365 (2009)

#twitter365 mosaic

The guidelines:

#twitter365 instructions

My response:

I know 2009 hasn’t finished yet, but Animoto (which I used to create the above video) has an image limit of 250.

View all of my #twitter365 photos and those of everyone who took part in the project. 🙂

The difference between visualizations and infographics.

All that glitters is not gold, and not everything that looks pretty is an infographic. For example here’s a visualization of my recent connections on Twitter using mentionmap:


This looks good but isn’t really very revealing. I’m well aware that I’ve been tweeting about tomorrow’s EdTechRoundUp TeachMeet (#TMETRU09) and with the people featured in orange. That’s why this is a visualization. It’s a pretty rendition of stuff I already knew.

TweetStats, however, produces something more revelatory:


We’ll ignore the fact that the service has mis-reported early 2009. 😉

What’s interesting is that this reveals something. It shows when I tend to tweet and how often I’ve done so in various months. There are other graphs besides these that give other interesting details.

Herein lies the difference between visualizations (which use non-numerical, qualitative stuff to represent something already known) and infographics (which use quantitative data to show or reveal something new).

Wikipedia:

(inspired by posts at FlowingData & information aesthetics)

Social media, open standards & curmudgeonliness.

The problem:

Harold Jarche:

The increasing use of software as a service (SaaS)… is simple, easy and out of your control.

Luis Suarez:

I guess I could sum it up in one single sentence: “The more heavily involved I’m with the various social networking sites available out there, the more I heart my own… blogs”.

It all has got to do with something as important as protecting your identity, your brand… your personal image, your own self in various social software spaces that more and more we seem to keep losing control over, and with no remedy.

A proposed solution:

Harold Jarche:

Own your own data (CC-BY Harold Jarche)

I’ve decided to start the Curmudgeon’s Manifesto, which may serve as a call to arms to start dumping platforms that don’t understand how to play nice on the Internet. It’s our playground, and through our actions we get to set the rules of conduct.

Here’s my start (additions welcome):

  1. I will not use web services that hijack my data or that of my network.
  2. I will share openly on the Web and not constrain those with whom I share.
  3. I will not lead others into the temptation of using web services that do not respect privacy, re-use, open formats or exportable data.

An alternative solution:

Wikipedia:

An open standard is a standard that is publicly available and has various rights to use associated with it, and may also have various properties of how it was designed (e.g. open process).

The term “open standard” is sometimes coupled with “open source” with the idea that a standard is not truly open if it does not have a complete free/open source reference implementation available.

OpenSocial:


Friends are fun, but they’re only on some websites. OpenSocial helps these sites share their social data with the web. Applications that use the OpenSocial APIs can be embedded within a social network itself, or access a site’s social data from anywhere on the web.

Harold Jarche:

Blog Central

One way to keep information accessible is to use an open, accessible, personal blog as the centre of your web presence.

OpenID:

OpenID is a decentralized standard, meaning it is not controlled by any one website or service provider. You control how much personal information you choose to share with websites that accept OpenIDs, and multiple OpenIDs can be used for different websites or purposes. If your email (Google, Yahoo, AOL), photo stream (Flickr) or blog (Blogger, WordPress, LiveJournal) serves as your primary online presence, OpenID allows you to use that portable identity across the web.

Conclusion:

Change the name of the Curmudgeon’s Manifesto to the Open Educators’ Manifesto (or similar). Back OpenID and OpenSocial. People like to sign up to positive-sounding things that cite big players or existing traction. I’m sure Chris Messina and other open (source/web) advocates have a take on this! 😀

On the glorious weirdness of connecting with people online.

It’s rare in this fast-paced world of Twitter and synchronous communications to come across high-quality reflections on how we connect online both professionally and personally. The video below, put together by D’Arcy Norman with contributions from the likes of Dean Shareski, Jim Groom and Barbara Ganley, is 15 minutes long. It’s absolutely worth your time – watch it now:

How do you connect to people online? from D’Arcy Norman on Vimeo.

Connecting with people online is, in a sense, a very strange experience. I can know a lot more about someone I’ve never met (and probably never will meet) in person, and who lives on the other side of the world, than I ever will about a work colleague. In fact, as I’ve often commented to people when discussing this, I think meeting people online actually leads to better relationships than if the situation is reversed.

For instance, this might sound silly but I’m always very careful never to wear my glasses when meeting people for the first time. Why? I don’t want them to pigeon-hole me. The next time they see me and I’ve got my contact lenses in I’m the guy ‘not wearing his glasses’. It’s a perception thing.

Meet people online, however, and it’s almost a window into their soul. One thing I find fascinating is people’s choice of avatar on Twitter. Some people choose to have an image of themselves to aid recognition when people meet them in person. Others change their avatar often. The people I’m interested in, though, are people like me: people who stick to one avatar and use it everywhere they go online. Presumably that’s because their avatar says something about them. Here are a few by way of example from people in my Twitter network – what do you think their avatars and bios say about them?

@lisibo


Primary MFL teacher, ADE, eTwinning Ambassador, speaker and blogger, improving techie and generally enthusiastic gal who loves her iPhone

@durff


[no bio]

@gsiemens


Changing the node set…

In the video embedded above, Dave Cormier talks about the ‘light’ connections we make with people and how these build up over time. I think this is what D’Arcy Norman (author of the video and, as of last month, no longer on Twitter) and Stephen Downes (a one-way user of Twitter) don’t get about social networking. Yes, 140 characters may be all too brief. But if I connect with you 50 times over the course of a few days, having had to craft each message to fit within the 140-character constraint, I bet we know each other a whole lot more than we did previously. And then you can go and look at my Flickr stream, my blog, etc. for more background. It’s not a replacement, it’s complementary.

Knowing an individual’s personal background and beliefs helps you judge whether to follow their advice and/or lead. But that’s not always best done only on the strength of meeting them face-to-face. I, for example, am much better (in terms of being coherent and understandable) when expressing myself using the written, rather than the spoken, word. Most connections online these days inhabit a world that is partly synchronous, partly asynchronous.* People may respond straight away to something you put online, or they may respond hours, days, weeks, months, or even years later. Because online content is an implicit, open-ended invitation to give your opinion and make comment, you can do so at your leisure. This promotes thinking and drafting when blogging, and iterating towards your actual opinion when using tools such as Twitter.

People who haven’t seen videos or listened to podcasts in which I feature are often surprised when they meet me in person. For a start, I’m often younger than they thought (one person commented that they assumed, because of my avatar, that I was ‘a fat, balding, forty-something’ – thanks!) People also don’t tend to realise I have an, admittedly diminishing, Northumbrian accent – replete with the rolling R’s. I’m all for personality and individuality, but sometimes these two factors – my age and my accent – have proved to be barriers in the physical world. Not so online. 🙂

So an ode to the internet and the connections it makes. No, scratch that. An ode to the people who give up their time to connect to people. To those who make my life better by contributing, questioning and criticising my work and my thinking. It’s great to have and to be part of an active audience!

* There’s probably a word for this, but I don’t know what it is!
