
June 26, 2013

A quiet invasion

This image shows students taking notes. Whenever I show this image, I see many people in my audiences nodding in recognition. It seems to be a familiar, everyday occurrence in some classrooms. In others it is rare or unlikely, because mobile phones are banned in some schools and colleges.

Teachers might respond to this image in one of two ways. They can bemoan the fact that they have spent an enormous amount of time developing the content, only for students to capture its entirety in seconds with a few simple button presses. They will ask whether this is a trivialisation of their content, or an undesirable development that leads to superficial learning. Alternatively, they can celebrate that learners are adept enough at using their personal technologies to make learning easier and more productive for themselves. They can support the idea on the basis that most learners will use that content later for reflection.

We might argue that the first response is based on a model of learning that privileges the teacher as the arbiter of knowledge, whilst the second response represents an approach that places students at the centre of the learning process. The first response, I suggest, might indicate that those teachers feel learning should follow a prescribed track of content delivery that is assimilated and ultimately re-presented by learners in an acceptable format to demonstrate that they have internalised that content. The second response suggests to me that students can be freed up to capture content, archive and organise it, repurpose and develop that content to facilitate deeper learning experiences, and share it with their peers to widen its influence in a discursive environment. Which model are you most familiar with in your classroom?

Personal technologies are proliferating and they are multi-functional. They are quietly invading the classroom, in the bags and pockets of your students. Mobile phones can be used for many purposes, most of which can either support good learning or undermine it. As educators we each need to ask ourselves some serious questions, such as: What is my attitude to student use of technology in the classroom/learning space? Am I threatened by its use, or do I feel comfortable when students use their personal tools in the learning environment? The answer to these questions will possibly reveal to you not only your attitude to personal technology, but also how you view yourself as an educator and as a professional.

For some teachers, students recording lessons is anathema, whilst for others it is fully encouraged. There are many who are ambivalent. What about students Googling what you say during a lesson to check whether you are correct, accurate or telling the truth? Some teachers feel that this is an undermining of their authority or a challenge to their professionalism. Others see it as a liberating and democratic approach to learning, where the onus is on the student to check all facts and to be critical. Some see the use of personal technologies in the classroom as distracting, disruptive and potentially dangerous. Others see them as an essential and natural progression of contemporary learning culture. Which are you?

Photo by Lori Cullen

Creative Commons License
A quiet invasion by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 21, 2013

An interview with Michael G Moore

I agreed to do a series of video interviews with notable scholars at the Annual EDEN Conference in Oslo earlier this month, and I was really spoilt for choice. There were so many prominent distance education and e-learning practitioners and theorists in attendance that it was a little difficult to know where to start. I managed to get Professor Michael G Moore (formerly of the University of Wisconsin-Madison and Penn State University) to sit down and have a chat with me about his life in distance education, the history of the subject and his own experiences as a learner. I first met Michael at a conference in Ankara, Turkey in 1998, and our paths have crossed many times since. He is well known as one of the pioneers of distance education, one of the original team of academic consultants who worked with the British government to establish the Open University in the 1960s, and latterly, as the long-serving founding editor of the American Journal of Distance Education. He is also credited with the theory of transactional distance, which has influenced many studies and publications on the topic over the last 30 years. Michael is quite simply an icon of distance education, and it is worth sitting for a few minutes and hearing what he has to say. Here's the video:



Creative Commons License
An interview with Michael G Moore by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 20, 2013

Shifting sands

I have been blogging, writing and talking about our digital learning futures for some time. Although it is very difficult to predict the future, we are aware of the trends and can use these to detect where we may be heading, and that may take us in one or more directions - hence the plurality of 'futures'. Technology is one of the major drivers of change in our society, and it is easy to see where it is being integrated into schools, colleges and universities. Mostly it is integrated into classrooms, but it is largely left out of most curricula. One of the reasons for this, I believe, is that we are rooted in old practices and outdated frameworks that are in need of change. Seats of learning are notoriously resistant to change, but change is needed if progress in education is to be made. We now live on shifting sands. Allow me to elaborate:

Many of our pedagogical theories, and much of our practice in higher education, are grounded in and derived from a pre-digital era, when the lecturer or professor was central to the process of education, and the classroom was the predominant place for learning to take place. Such approaches to pedagogy were rooted in the behaviourist model of psychology, which privileged expert knowledge and formalised its transmission to novices. Education premised on this philosophy has commonly been referred to as the 'factory model' because of its parallels to industrialised working: batch-processing, rationalisation of resources, synchronised behaviour and homogenisation of product. It was said of Henry Ford's car factory that you could choose a car of any colour, as long as it was black.

In a time when education was organised around experts (teachers) and students (novices), and where behaviour was required to be synchronised and content homogenised, such instructionalist approaches seemed relevant and appropriate. However, society moved on, the world of work changed, and industrialised processes were replaced by knowledge working. And yet, although society has moved on, industrialised processes still persist in all sectors of education. It is more comfortable to stay the same than it is to change. We still see large groups of students sitting in rows (often in auditoria and lecture theatres) struggling to take notes as a professor at the front holds forth on some theory or debate. Someone once remarked that the lecture is the most effective way to transfer the lecturer's notes into the students' notes without passing through the minds of either. The old modes of teaching are outmoded. New modes will replace them - indeed, they already are.

In the digital age, where we are surrounded by new and emerging technologies, pedagogical theories and practices are in need of change. Technology is disrupting everything it touches, and education is no exception. Academic roles are changing. With the agenda for student-centred learning, teaching staff should now act more as a supporting cast than as leading actors. Learning can take place anywhere, anytime, so formalised education contexts such as those seen in classrooms will become less important. Whilst learning is still learning, the pathways that lead us to it are radically changing, and there will need to be shifts in our perception and changes in our attitudes as a teaching profession if we are to make sense of the seismic effects of new technologies.

In the keynote I will deliver to the INTO Staff Conference at Newcastle University on midsummer day - 21 June, 2013 - I will explore new technologies and new pedagogies, and offer an evaluation of some of the new digital age learning theories.

For example, social media is encouraging learners not only to discover existing knowledge, but also to create, repurpose, organise and share new knowledge. Most of this activity is self-organised. The self-organising spaces that are proliferating on the web support the spread of these activities and, in so doing, help to sustain and grow the learning communities of practice that are now so vital to our society. Self-organised spaces such as Wikipedia and YouTube have received bad press in the past, but increasingly, such huge and ever-growing repositories of knowledge and learning resources will become more important in education. We will also witness the growth of ambient and transient learning communities, which will spring up and thrive for a specific purpose, and then disappear again just as quickly. That they are ephemeral will, from this perspective, be less important than the impact such communities have whilst they exist.

Learning is changing, as are the roles of teachers. Technology will continue to disrupt our lives, and education will be conducted in many diverse ways, and in multiple contexts. Much learning will become informal, but as for formal learning - the expectations of young people will be different from the expectations we had when we were students in university. We can no longer afford to teach in the same ways as we were taught. And we can no longer avoid or ignore the technology wave that is driving these changes.

Photo by Steve Wheeler

Creative Commons License
Shifting sands by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 15, 2013

Sugata Mitra - Charlatan or genius?

'It's quite fashionable to say that the education system is broken - it's not broken, it's wonderfully constructed. It's just that we don't need it any more. It's outdated.'

These are the radical thoughts of Professor Sugata Mitra, the doyen of innovative education and a figure of some controversy. Sugata Mitra recently won a $1 million TED prize to develop his ideas around his 'School in the Cloud' and building on his notion of minimally invasive education. Mitra told me the phrase 'minimally invasive education' came from his interest in medical procedures, where keyhole surgery was the least invasive method for surgical intervention, and caused the least amount of trauma.

What if, he argued, we could do the same thing with education? Well, many people now know the answer to that question, because he has disseminated his findings, and whatever the detractions and arguments against the Hole in the Wall projects, it has to be said that this bold experiment has produced some striking outcomes. Not least, claims Mitra, that children, when left to their own devices (in this case a computer screen and touchpad mouse set into a wall) and working in small groups, will teach themselves an extraordinary amount of new skills and knowledge. Whatever your views on this shade of auto-didacticism - and there have been some vociferous criticisms - do watch the video interview I did with him at this year's EDEN conference, and then make up your own mind. Is he a charlatan, or a genius?



Photo by Steve Wheeler

Creative Commons License
Sugata Mitra - Charlatan or genius? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 13, 2013

What is it about games...?

Just what is so special about games playing? Why is it so popular with all ages? And what is it that divides opinion so deeply about whether games have a place in the curriculum? Never has there been so much opportunity for schools, colleges and universities to capitalise on and exploit the power of games to inspire, engage and enliven learning. And yet a straw poll taken amongst any group of teachers will reveal some strongly polarised views. Some teachers extol the virtues of any game, and claim that all games have the potential to support learners as they acquire new skills, problem-solving abilities and knowledge. Others argue equally strongly that games have no place in the classroom, because they are distracting, take students off task, and because there is no empirical evidence to support the argument that they contribute anything significant to the learning process. I have personally nailed my colours to the mast with several recent blog posts, including The games we play and The future of gaming. Earlier today at the EDEN Conference, here in Oslo, I took part in a special session on games based learning.

I presented a paper co-written with one of my students, Lucy Kitching (@lmkitching), which had originally been her third year research dissertation. Lucy's project set out to investigate whether games consoles have a positive impact on primary school girls' motivation and learning. She found that there are strong correlations between playing games consoles at home and the expectation of playing games in school for learning purposes. She also showed that there was (in the UK schools she studied) a significant shortfall in the provision of games consoles in comparison to those available in the average home. Whilst these findings might elicit a 'so what?' from some readers, the results of her study nevertheless indicate how current contexts can affect the motivation of learners to engage in learning with games consoles in formal education settings.

The second paper, presented by Marie Maertens (Katholieke Universiteit Leuven, Belgium), showcased some research into instructional design approaches in games. Marie spoke about adaptive and adaptable game-play and how games can be designed to respond to the individual needs and preferences of learners. She concluded that with the conceptual framework the team have developed at KU Leuven, it is now possible to create games based on valid measurements of student performance, with adaptive mechanisms that respond to changes in learners' knowledge and skills more or less as they occur.

The final paper in the session was from Anne-Dominique Salamin (HES-SO, Switzerland) who continued the theme of adaptive games. Her paper, entitled 'New Tools for Students' also featured a live demonstration of a virtual world within which a variety of different decision making scenarios could be presented. Students were rewarded throughout the game by credits which they could then 'spend' to improve the graphical environment within which they were working. As an added touch, occasional and random insects walked across the screen to be targeted by the learner. One delegate tweeted during the event that they had never before witnessed such a bizarre occurrence at a conference - several professionals in unison shouting out 'kill that insect!'

The discussion that followed was productive and discursive, covering such areas as student expectations, the constraints of games-based learning, teacher and parent objections, design features, and graphic rendering speeds in educational games.

Photo by Steve Wheeler

Creative Commons License
What is it about games...? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


Disruptive education

Earlier today at the EDEN Annual Conference at the University of Oslo, I interviewed June Breivik. June, who is at the Norwegian Business School, will be presenting a keynote speech tomorrow at the event on 'Disruptive Education'. She has some strong views on how education needs to change, and believes that disruption is necessary to challenge the current paradigms of teaching in higher education. She sees the teacher's role changing, and argues that technology is a driving force in that change. She is not technologically deterministic, but sees specific technologies - social media tools such as blogging and Twitter - as a means of liberating learners (and their teachers) into a new way of creative communication, and a new means of representing knowledge. June feels that students should take centre stage in the learning process, and that teachers should cede the 'power over the learner' they have held for so long. She takes a positive view of the idea of the Flipped Classroom, and she also practises it in her own professional context. It is refreshing to know that June believes in walking the talk. The brief video interview below provides a fascinating primer for what she will present to the EDEN delegates tomorrow morning here in Oslo.



This post is mirrored on the official EDEN Conference blog

Creative Commons License
Disruptive education by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


The kids are all right

This morning, here in Oslo, it was all about the kids. And why not? Too often we gather to discuss education, to expound on learning theories and to congratulate ourselves on our pedagogical prowess, and yet we miss the crucial element, the one that should be central to everything we do: the learner. Where is the learner voice at learning conferences? This was addressed today at EDEN, and I'm glad I was there to witness it. Kids are generally honest in their opinions, so if you want to know how well a school is doing, ask them. Ask them out of earshot of the teachers, and off the record, and they will tell you.

During the plenary opening session we heard from several frank young students about their experiences at school. It was not good news for the Norwegian education system. One student said 'I don't want my own children to experience what I went through at school'. Another opined 'The way school is structured doesn't enable students to find each other. It is too structured'. A third simply said, quite bluntly, 'school is boring'. I would imagine that the same sentiments could be expressed by school learners the world over. It's important to qualify these comments in the context of this year's EDEN Conference, the title of which is 'The Joy of Learning: Enhancing Learning Experience, Improving Learning Quality'.

Why are students' experiences so poor in most state-run schools? Friedrich Nietzsche once suggested that education in state-run schools is bad for the same reason that cooking in large canteens lacks quality. One size patently does not fit all, and one of the students highlighted this, calling the standard student 'a myth'. He was right, and this message resonated around the auditorium. We have been teaching this way in schools for years, and it is about time it stopped, was their message. It was a refreshing start to the event, one that challenged us all, and made us think about the future of education.

The rest of the conference will now be required to focus on what we need to do to change the current situation in so many schools around this planet. What will we do to make lessons 'less boring'? More importantly, what can be done to ensure that learners are more engaged? Children cannot vote with their feet in compulsory education, but they can vote with their minds. If we, as teachers, are not in the game, and do not convey enthusiasm, inspiration and excitement to our students, how will they become engaged? How can we turn them on to studying serious subjects if we cannot get enthusiastic ourselves in the first place? How will technology supplement and support these processes? Is learning changing, and if so, what changes will we have to make to accommodate it?

Read this blog over the next few days: we, as a blogging team, will report to you from EDEN on what is being said, discussed, explored... and promised... for the future of education.

Photo by Steve Wheeler

NB: This post is mirrored on the official EDEN Conference Blog

Creative Commons License
The kids are all right by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 07, 2013

Digital beings

This is the third in my series of retrospective reviews on seminal learning and technology books. I have scoured my personal book library in search of a dozen books that have influenced my own thinking, and share a synopsis of their contents with you. Today's book recommendation is:

Nicholas Negroponte (1995) Being Digital. London: Hodder and Stoughton.

Nic Negroponte's Being Digital was groundbreaking. It was the first substantial mainstream book to explore the impact of digital technology on society, and demonstrated just how prescient and insightful Negroponte could be. As the man who was instrumental in setting up the MIT Media Lab, and one of the founders of Wired magazine, much was expected of him, and he delivered, in spades.

He begins with a sort of apology, pointing out the irony (before his critics can) of publishing a book about digital technology as an analogue artifact. Later he labours the point that we live in a society created around atoms, when in fact most of our information is now in bits. This atoms and bits division amplifies and reveals the societal divides we see all around us. Such dichotomous contexts would necessitate the rethinking of social rules, relationships, privacy, ownership and copyright, legal rulings, and just about everything else we were comfortable with. He warns: 'As we interconnect ourselves, many of the values of a nation-state will give way to those of both larger and smaller electronic communities. We will socialize in digital neighborhoods in which physical space will be irrelevant and time will play a different role.' (p 7) In short, Negroponte was warning that disruptive change was imminent.

Negroponte's writing style is acerbic and witty in equal measure, and he has the knack of situating complex and potentially alien concepts in everyday contexts, so that his reader can apprehend their full meaning and implication. He ranges effortlessly through personal technologies, social spaces and the Internet of Things (writing about all of these technological advances while they were still either in their infancy, or just a gleam in the eyes of the Silicon Valley geeks). At the time of publication there was much discussion about the so-called 'Negroponte Switch', where he had predicted a switch between terrestrial and satellite distribution of content. What became the real Negroponte Switch for me, however, was probably more important and useful, and less obscure for everyone: the transition of control over content creation from the producer (film companies, record labels, broadcast media and publishers) to the consumer. Negroponte had already begun to think about how the Web might facilitate this sea change, when he remarked that there would be 'a change in the distribution of intelligence... from the transmitter to the receiver.' (p 19) Much of the new Web content of recent years has indeed been generated by individuals using personal technologies. Wikipedia, YouTube and a host of other social media platforms are examples of this switch.

He poses the question: how can technology make our lives better? Answering his own question, he suggested 'creating computers to filter, sort, prioritize, and manage multimedia on our behalf - computers that read newspapers and look at television for us, and act as editors when we ask them to do so.' (p 20) With almost two decades of hindsight and technology advances, we can safely say that there are dozens of tools that can do just that for us, filtering, aggregating and curating content - though it is often users and the community who do the editing and sorting. In being digital, we become digital beings. So perhaps we are halfway there.

Negroponte also had much to say about design and multi-modal interfaces in Being Digital. He posited reasons why touch screen interfaces could be so versatile and intuitive, and cited some early research he had been involved in at MIT on the development of gestural tools. He believed then that the best kinds of designs incorporated human intuition into the interface, and that touch screens would become more or less commonplace. His remark here is revealing and prophetic: 'Wherever the computer may be, the most effective interface design results from combining the forces of sensory richness and machine intelligence.' (p 100)

When I first bought my copy of Being Digital, I read it from cover to cover in a single day. It is that sort of book, and even today, nearly 20 years on, it still has much to tell us, not only about what we have already witnessed, but about what is yet to come. I can report that, after enjoying dinner with Nic recently in London, he is as witty, engaging and entertaining in real life as he is within the pages of his book.

Oh - and I also made sure Nic signed my first edition copy.

Photo by Steve Wheeler

Creative Commons License
Digital beings by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 05, 2013

Identity play

This is the second in my series of retrospective reviews on seminal learning and technology books. I have scoured my personal book library in search of a dozen books that have influenced my own thinking, and share a synopsis of their contents with you. Today's book recommendation is:

Sherry Turkle (1995) Life on the Screen: Identity in the Age of the Internet. New York: Touchstone.

Life on the Screen is a seminal book. It is now almost two decades old, and you may ask how useful or relevant a book on technology written in the last century can be. Although technology has moved on apace - since the book was written we have gained social media, mobile phones and touch screen tablet computers - many of Sherry Turkle's ideas still resonate with personal meaning. The reason for this is that she doesn't focus too much on transient effects such as what technology we use; she is more interested in pursuing questions around human identity and how it can be influenced by technology. Sherry Turkle was, as Paul Judge has suggested, the Margaret Mead of cyberspace. Turkle was to all intents and purposes the first digital anthropologist, practising participant observational research. Instead of sitting in mud huts as Mead did, Turkle immersed herself in the culture of the digitally mediated communication platforms of her day (in 1995 the predominant forms were MUDs - Multi User Domains - and MOOs - MUD Object Oriented). She lurked in the corners of the chat rooms and recorded the conversations she witnessed. In doing so, she created a rich tapestry of knowledge about personal lives, multiple contexts and the construction of public identities. We need to remember that in the mid-nineties, MUDs and MOOs were quite primitive in comparison to the media-rich social networking tools of today, and relied mostly on textual communication. Hence, Turkle declares:

'On MUDs, one's body is represented by one's own textual description so the obese can be slender, the beautiful plain, the "nerdy" sophisticated. The anonymity of MUDs - one is known on a MUD only by the name of one's character or characters - gives people the chance to express multiple and often unexplored aspects of the self, to play with their identity and to try out new ones. MUDs make possible the creation of an identity so fluid and multiple that it strains the limits of the notion.' (p 12)

Turkle is perhaps one of the first authors to identify the fact that because human identity is fluid, manipulation of personae can be amplified and projected through the use of digital media. Today, even with the use of images, audio and video to supplement textual communication, people still have the capability to hide behind anonymity and also to manipulate their identity in many different ways. In some ways, she suggests, identity play can be therapeutic. More importantly, Turkle acknowledges that personal identity is often in the hands of individuals to make of what they will, a nod in the direction of the personalised spaces and digital presence construction that were to emerge a decade down the line. Turkle began to pose questions that were to gain a purchase on the rapid development and proliferation of computer mediated communication. She asked:

'Do our real-life selves learn lessons from our virtual personae?' (p 180) and documented how the early users of the Web struggled to come to terms with multiple contexts and the manipulation of multiple identities, with her ominous and prescient question: '... are we watching a slow emergence of a new, more multiple style of thinking about the mind?' (p 180) In retrospect, Turkle was asking exactly the right questions, because evidence now exists that we do apply in real life many of the lessons learnt on the Web, and we have as a post-industrial society come around to thinking about identity as multiple, and the individual as multi-tasking. We live in a world far richer in terms of social networking than Sherry Turkle did in the 1990s. And yet her studies into the use of these primitive versions of what we now call social media revealed much of the truth about how we still engage with each other online today. She saw, for example, that we can easily deceive ourselves:

'... a virtual experience may be so compelling that we believe that within it we've achieved more than we have'. (p 238) This is clearly an experience we repeat time after time, as we spend endless hours immersed in chat, sharing and commenting, liking and favouriting, and ultimately engaging with our personal learning networks. How much of this could be achieved in real-life in less time, and more simply?

Turkle correctly identifies several facets of computers, each expressed as a metaphor, and each rings as true today as it did in 1995. She suggests that computers can be used 'as tool, as mirror, and as gateway to a world through the looking glass of the screen.' (p 267) Here, Turkle makes oblique reference to the symbolic interactionist work of Charles Cooley, who suggested that we see ourselves reflected in the 'looking glass' eyes of our interlocutors, and adjust our behaviour accordingly, simply to be accepted. It is probably true that fewer of us today tend to hide behind fake identities, and we are influenced by the responses, comments and retweets of our peer group. Perhaps that makes us more open and honest, but somehow it also reveals we are becoming increasingly naive.

Photo by Steve Wheeler

Creative Commons License
Identity play by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 04, 2013

Power struggle

This is the first post in a new series. I'm going to present retrospective reviews of a dozen seminal books that I think should be on the reading list of anyone who wants to learn more about the social and cultural impact technology is having on learning and education. Here's the first, one for those who wish to understand how media and culture influence each other:

Henry Jenkins (2006) Convergence Culture. New York: New York University Press.

The strap line to Henry Jenkins' 2006 book Convergence Culture is 'where old and new media collide'. Collide may indeed be the most appropriate verb to apply, because as Jenkins reveals, there is enormous tension where old and new media intersect. It was clear even in 2006 that the old, closed and controlled media of radio, television, movie-making, the recording industry and the press were struggling to maintain their dominance against the new, open and democratic media found on the Internet. As we now know, that dominance has slipped, as billions swarm to participate in the new media, creating, remixing and sharing their content. The best the old media can hope for is that there will still be a significant place for them alongside the new media that have pushed them to the sidelines. As Jenkins warned:

'Audiences, empowered by these new technologies, occupying a space at the intersection between old and new media, are demanding the right to participate within the culture. Producers who fail to make their peace with this new participatory culture will face a declining goodwill and diminishing revenues.' (p 24)

Jenkins takes the reader on a journey through popular culture and reveals the back stories behind some of the world's most successful blockbuster movies and TV shows, and how many have been defined through new technology. Jenkins also attempts to explain some of the complex issues of our time: the cultural shifts occurring where consumers and producers fight for control over myriad disparate channels and platforms both in the mainstream media and online. He champions the democratisation of knowledge, and highlights the collective intelligence behind many of the dramatic rises in popular digital culture:

'What holds a collective intelligence together is not the possession of knowledge - which is relatively static, but the social process of acquiring knowledge - which is dynamic and participatory, continually testing and reaffirming the group's social ties.' (p 54)

Inevitably, he argues, the people will win over the corporates. Power will pass from the corporate boardroom into the teenager's bedroom. We will see a decentralised media environment, free from network control. Control will pass to the communities who invest in knowledge building, and that knowledge will be defined by them. On Wikipedia he says: '...the process works. It works because more and more people are taking seriously their obligations as participants to the community as a whole... what once was taken for granted must now be articulated. What emerges might be called a moral economy of information: that is, a sense of mutual obligations and shared expectations about what constitutes good citizenship within a knowledge community.' (p 255)

This is a somewhat idealistic but very well written book, presented in accessible language and flowing prose. For me, though, its greatest quality is that it makes you think. It causes you to stand back and pause, to contemplate many of the phenomena we now take for granted - and that is often the attraction of revisiting the old, seminal texts that have defined our recent history and cultural development.

Photo by Jonny Jelinek

Power struggle by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


June 01, 2013

GRIN and bear it

The future may be bright. But the future may also be dark and disturbing. GRIN is an acronym that represents the four big emerging technologies of our times. Many believe them to be the defining technologies of the age and their speed of development will determine how far we decide to travel down the road of post-humanism to the point where humans are physically and intellectually enhanced - Human 2.0. In this short article I raise some of the issues for debate that will shape the future of humankind.

GRIN stands for Genetics, Robotics, Information technology, and Nanotechnology. Individually, each technology is influential and each is developing at a rapid pace. In combination, GRIN technologies are advancing exponentially. Each has courted its fair share of controversy, particularly from the disquiet expressed about its ethical and moral implications, but also due to the unknown long term consequences it may bring. Nanotechnology is so new that no-one is really sure what will come of it (Bonsor and Strickland, 2011), whilst robotics is so mature a science that we now know almost too much about it. Robotics, for example, is not governed by the failsafe laws predicted by Asimov (no robot shall do harm to a human being). The use of drones and other military applications of robotics has sadly demonstrated that machines are governed by humans, and that robotics enables human controllers to kill very efficiently. So concerning is its rapid progress that a United Nations expert has recently called for a halt to military robot development.

Genetic manipulation of the building blocks of life has spawned advances in gene therapy, germline cell therapy and genetically modified crops. None of these is without its dangers and difficulties, but each is also making inroads into perennial problems such as inherited health conditions, pest control and food production.

The potential for the combination of all of the GRIN technologies to advance human capabilities is great, but there are also several important issues of which we should all be aware. It is one thing, for example, to create an artificial being such as a robot to perform mundane tasks, but what happens if it becomes sentient? Artificial humans have long been a popular topic of dystopian science fiction and popular culture. From the myths of ancient Greece and Frankenstein's monster through to Terminator and I, Robot, the warning is clear. Give a machine a mind and superior strength, and it is liable to turn upon its maker.

More likely to emerge first will be hybrid enhanced humans. What happens when we begin to merge humans with cybernetic systems to such a level that we create super-humans - true cyborgs? Even more likely, how will we justify tampering with genetics to the point where we are cloning versions of ourselves with the sole intent of harvesting their DNA or organs for experimentation or for transplantation? All of these scenarios are possible, some are already happening, and some are in the process of being realised, but are they desirable or ethical? Do GRIN technologies give humankind a blank cheque to experiment to the point of no return? Do we have a licence to combine man and machine to the point where we can no longer see the join? Are we able to reasonably measure the benefits of Human 2.0 against the potential dangers and threats, when in fact we are not yet able to predict outcomes and consequences?

I suspect that GRIN technologies will appear with increasing regularity in the news as our knowledge advances, to the point where hybrid, enhanced humans are commonplace, and machines achieve the processing power to replicate human thought. At that point we will have achieved the 'Singularity'. According to prophets of the new age such as Ray Kurzweil, the Singularity is the imminent point in human development where technology is advancing so rapidly that humankind will no longer be able to comprehend it. Then we will see the age of the Transhuman - where biology is transcended by technology, and where GRIN technologies enhance humans beyond their natural capabilities. It will be the tipping point, and there will be no turning back.

Perhaps we have already gone too far. Genetically altered human beings already exist and have been with us for more than a decade. We are not talking here about test-tube babies. We are referring to the experiments first conducted by the Institute of Reproductive Medicine and Science at St Barnabas Hospital in New Jersey. Genetically modified babies have been born to women who had difficulty conceiving naturally. The Institute added extra DNA from a female donor to the mothers' eggs before they were fertilised, and when the babies were born, they were found to have inherited DNA from one man and two women (Hanlon, 2013). All well and good, you may think. Thirty women who previously could not conceive now have teenage children. But what if scientists next decide to push the experiment further and create humans that have enhanced physical strength or intellectual capabilities? What will be the consequences of this act? Have we now forever artificially altered the genetic structure of the future human race? Will we have to simply GRIN and bear it?

Photo by Yorgos Nikas (Wellcome Images)

GRIN and bear it by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 30, 2013

Are you buying steam?

In the July 2013 edition of Wired magazine, Jonathan Zittrain (Harvard Law Professor and author of The Future of the Internet) warns of the danger of censorship now that we are moving to the cloud. Zittrain is worried about the possibilities of 'censoring, erasing, altering or restricting access to books', and argues that digital texts are 'increasingly coming under the control of distributors and other gatekeepers rather than readers or libraries.' He has a point. The provisionality of digital media - that is, the capability to change or edit an entire text instantly - and the cloud based storage that makes one version available for all to access but not to own in the physical sense, make it likely that the system could be abused. Purchasing and downloading a book for your e-reader, he warns, won't necessarily protect it from disappearing from the web, because unlike physical copies of books (or music), users only purchase a licence to read (or listen), not the entire work itself. Digital media is volatile, and is as likely to be withdrawn over copyright issues as it is to fall prey to censorship.

So what is the future for digital text? Will there be a danger to our use of e-books? Will we be put off by the lack of protection of our purchases? Isn't purchasing an e-book a little like buying steam? Or is the next generation of readers already sold on the idea of digital only versions of books? They certainly save on physical storage space, but just how secure are they? How many will still subscribe to e-book purchasing if some of their texts disappear without warning, perhaps because an author has decided his work is flawed? What happens when a publisher discovers that some books in their catalogue have been published erroneously, but are not yet in the public domain, and then has to withdraw them for legal reasons? The copy you have purchased will disappear from your Kindle. You bought the licence, not the book, remember? This actually happened in 2009, says Zittrain, when online retailer Amazon withdrew George Orwell's novel 1984 for that very reason.

Zittrain reserves most of his concern for preserving the integrity of literature. What is to stop someone changing, adjusting or completely revising a text, when it is centralised and in digital format? he asks. He suggests that libraries could act as the arbiters of truth in this instance, monitoring and continually comparing their physical book stock against its digital counterparts to ensure no changes have illicitly taken place. That's a long shot though, and I wonder just how many libraries actually have the resources and staffing to perform such a fastidious and time-intensive service.

I think the future of e-books is secure. Unlike some digital content, the e-reader isn't going to go away, and many millions worldwide have already subscribed to the concept. Opinion is still divided over which is preferable, reading from print or reading from a screen. Yet the biggest debate is probably yet to come - how to address the many legal, ethical and technical questions that remain about who owns the content you have purchased.

Photo from Wikimedia Commons

Are you buying steam? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 29, 2013

Global digital tribe

If you are immersed in technology mediated communication, there are no apparent barriers to membership of your community of practice. It is your personal network. It is your virtual community. Call it what you wish. To me it resembles a global digital tribe. It is tribal because the global online community exhibits many of the characteristics of traditional, territorial tribal practice. Whether or not we realise it, if we regularly use social media, we are members of the worldwide digital tribe. Most anthropologists agree that a tribe is a (small) society that practises its own customs and culture, and that these define the tribe. Tribal identity in the age of the Web transcends ethnicity, traditional cultural expectations and geography (Wheeler, 2009). The global digital tribe has many smaller sub-sets we can call clans. These often exhibit customs and cultural values that separate them from activities that occur in 'real life' contexts. It would be ridiculous, for example, to 'poke' people in real life. People feel free to say much more on Twitter than they would dare to say to someone face to face.

There is a history behind this. Industrial society eroded the tribal gatherings of more primitive societies and redefined community. Vestiges of ancient tribal culture were often preserved only in large gatherings at public events such as football matches (where two opposing clans were likely to clash), and in smaller community gatherings seen for example in the ritualistic gatherings in the local pub or at a church service.

Post-industrial society saw the emergence of personal computers, the Web and a global communication network of mobile phones. Social media, the most social of all modern technologies, provided us with shared virtual spaces where we could 'meet'. We now regularly find ourselves gathered around digital totems, where we tell our stories, give our warnings, share our ideas. We celebrate, commiserate and collaborate, and the digital totems we gather around are the virtual clannish spaces that facilitate these actions. We have, it seems, come full circle, and 'community' has once again been redefined.

Facebook is currently the largest of the digital totems in the social media universe. The Facebook clan (a sub-set of the global digital tribe) attracts in excess of one billion tribal members, many of whom are busy telling their stories, celebrating success, sharing images and videos, showing their allegiance to their idols (liking a pop star's page or a movie premiere), playing social games and engaging in frivolous behaviour. There are many other smaller digital totems to choose from. One is the Flickrite clan, who gather to view and discuss photographs. Another is the Wikipedian clan, which exists to create knowledge. Yet another is the Google Hangouts clan, whose totem offers a number of affordances to support all of the above tribal activities, as well as collaborative shared spaces and synchronous video links that overcome physical constraints such as geographical distance. These are probably some of the richest digital totems in terms of their media communication capabilities, but there are many others: some global, some regional, some specific to particular languages, some aligned to particular interest groups or age ranges. Digital mediation and connection are becoming a societal norm for many in the Western industrialised world, transcending time and space constraints, providing communities with their social glue.

I once wrote: "Where digital communication has fractured the tyranny of distance and computers have become pervasive and ubiquitous, identification through digital mediation has become the new cultural capital" (Wheeler 2009, p 68). By this I meant that each individual who habituates into regular use of digital media enters into membership of their particular digital clans. We begin to identify ourselves through the relationships we have developed with others during the time we spend around our digital totems. Some we know in real life, others we only know through the digital connection. Either way, we tend to go where our friends are, and that changes over time. Membership of digital clans can be volatile, as has been demonstrated in the sharp rise and decline of once popular tribal online spaces such as Bebo or MySpace.

Whereas in real life, we connect with each other through commonalities such as interest, age, gender or ethnicity, and identify with like-minded others through our body language (postural echo) and clothing (costume echo), in tribal digital spaces we express our affinity and interest by liking, favouriting, retweeting, poking, following and tagging. These are visible expressions of friendship and belonging that amplify their presence across the connected tribe, much as drumming, dancing and storytelling have always been the cultural capital of tribal life around traditional totems. Internet memes - units of digital cultural knowledge - are very easily propagated and amplified across social media platforms. The transmission of such memes can become viral, with exponential spread of content, often initiated by the desire to share interesting content with the members of one's community. Such activities could be construed in the tribal context as 'marking of territory', or expression of ownership over artefacts (Wheeler and Keegan, 2009).

How can we harness the power of these tribal characteristics in organisational learning? Many of our digital tribal activities are performed on an informal basis. People use Facebook or Twitter because they want to, not because there is an organisational requirement. It is a strange conundrum that most organisational learning that is presented in digital format is delivered through Learning Management Systems (LMS) - many of which are so poorly designed or difficult to navigate that employees and students tend to avoid using them. My students tell me they use the LMS when they have to, but they use Facebook when they want to, because their tribe meets there. The tension between formal and informal learning can sometimes be reduced to choices as prosaic as this.

Organisations should beware of silencing the voice of the individual. Many tribes 'embrace the status quo and drown out any tribe member who dares to question authority and the accepted order' (Godin, 2008, p 4). Social media tools offer members of the tribe, even a tribe as large as a multi-national company, the means with which to make their individual voices heard above the cacophony of daily working. Blogs, video sharing and other personalised technologies can enable anyone to contribute to the discourse.

Another thing we need to acknowledge is that a lot of learning is social. Employers need to see the opportunities that social media can present for employees to engage socially, wherever they are located. Further, because people learn from each other informally, informal meeting spaces should be seen as vital to the learning process, rather than undesirable distractions that need to be eradicated from the workplace. The most familiar social space, particularly for distributed work teams, is the social network. Next, learning and development leaders need to realise that not all learning can or should be provided from central sources. Increasingly, tacit forms of knowledge are important, and these can often best be acquired within the tribal community. Finally, learning should be situated. The best learning is acquired within the same spaces in which it will later be applied. For knowledge workers in particular, the place where learning will be applied is invariably online, in amongst the shared virtual spaces, and around the totems of the global digital tribe.

References

Godin, S. (2008) Tribes: We need you to lead us. London: Piatkus.
Wheeler, S. (2009) Digital tribes, virtual clans. In S. Wheeler (Ed.) Connected Minds, Emerging Cultures: Cybercultures in Online Learning. Charlotte, NC: Information Age.
Wheeler, S. and Keegan, H. (2009) Imagined worlds, emerging cultures.  In S. Wheeler (Ed.) Connected Minds, Emerging Cultures: Cybercultures in Online Learning. Charlotte, NC: Information Age.

Photo from Wikimedia Commons

Global digital tribe by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 27, 2013

Beneath the facade...

If we scratch just below the surface of education, and we examine the nature of knowledge, we see an interesting challenge. It is increasingly apparent that knowledge as we 'know it' is inextricably linked to those who are in control of it. The knowledge gate-keepers have been in charge for some time, and knowledge is power. But all of this is already changing, as those beyond the inner circle begin to understand that through technology, they can create knowledge too. Our conceptions of knowledge could be said to be in a state of flux and uncertainty. If we accept that there is no monopoly anymore we need to ask some questions. In an age where anyone with an internet connection can create content, who now decides what we accept as a 'fact', and who is in control of our representations of reality?

Evidence from a number of sources has indicated that our conceptions of knowledge are indeed changing and that new and emerging technologies have a key role in the process (Guy, 2004; Lankshear and Knobel, 2006; Kop, 2007; Kress, 2009). Personalised tools lead to personalised learning, and the impact of this should not be underestimated. It is clear that tacit, informal knowledge stands in tension with explicit, formal knowledge. This is largely due to the fact that tacit knowledge includes the concepts, ideas and experiences that we have internalised personally, as opposed to the formalised knowledge we have learnt, which is often decontextualised (Wheelahan, 2007). For many in today's technology rich, rapidly changing, networked society, personalised learning has acquired more value than anything that can be offered by organisations. Person-specific, individualised knowledge trumps the generic knowledge that was suited to the needs of the industrial era.

Bates (2009) reinforced the view that generic, academic knowledge is no longer enough to meet the needs of the networked society:

"...it is not sufficient just to teach academic content (applied or not). It is equally important also to enable students to develop the ability to know how to find, analyse, organise and apply information/content within their professional and personal activities, to take responsibility for their own learning, and to be flexible and adaptable in developing new knowledge and skills. All this is needed because of the explosion in the quantity of knowledge in any professional field that makes it impossible to memorise or even be aware of all the developments that are happening in the field, and the need to keep up-to-date within the field after graduating."

Lyotard (1984) went further, suggesting that the boundaries between disciplines are eroding (many consider that they were always a false distinction anyway), and that traditional forms of knowledge transmission would be supplemented (and in some cases supplanted) by new methods of knowledge acquisition through technology. Thirty years on, Lyotard's predictions are uncannily accurate. Citizen journalism, for example, is rapidly becoming a key component of contemporary news reporting, appearing in many major TV news broadcasts. Everyone who has a smart phone, it seems, is a potential photojournalist. Wikipedia has for many replaced Encyclopaedia Britannica as the first port of call for knowledge acquisition. The fact that anyone with an internet connection can now contribute to knowledge is anathema to those who believe that knowledge generation should be the sole preserve of experts (Keen, 2007). Regardless of any such objections, user generated content is the dominant form of knowledge available on the web, and continues to grow. The checks and balances being implemented by the likes of Wikipedia are attempts to ensure that such knowledge is accurate and relevant. The users themselves will ensure that it is kept up to date. As Kop (2007) points out, 'knowledge is no longer transferred, but created and constructed', and 'the validity of knowledge has become judged by the way it relates to the performance of society' (p. 193).

Are we witnessing the demise of the knowledge gate-keepers? Will we now see a decline in the Ivory Tower mentality that for centuries has held sway over learning in higher education? And how responsible is technology as a disruptor of this old paradigm of knowledge representation? Who is now in control of knowledge? We all are. What we do with that knowledge will determine the future of education.

References

Bates, T. (2009) Does technology change the nature of knowledge? Online Learning and Distance Learning Resources (Online publication)

Guy, T. (2004) Guess who's coming to dinner? cited in Kop, R. (2007) Blogs and wikis as disruptive technologies: Is it time for a new pedagogy? in M. Osbourne, M. Houston and N. Toman (Eds.) The Pedagogy of Lifelong Learning. London: Routledge.

Keen, A. (2007) The cult of the amateur: How today's Internet is killing our culture and assaulting our economy. London: Nicholas Brealey Publishing.

Kop, R. (2007) Blogs and wikis as disruptive technologies: Is it time for a new pedagogy? in M. Osbourne, M. Houston and N. Toman (Eds.) The Pedagogy of Lifelong Learning. London: Routledge.

Kress, G. (2009) Literacy in the New Media Age. London: Routledge.

Lankshear, C. and Knobel, M. (2006) New Literacies: Everyday practices and classroom learning. Maidenhead: Open University Press.

Lyotard, J. F. (1984) The post-modern condition: A report on knowledge. Manchester: Manchester University Press.

Wheelahan, L. (2007) What are the implications of an uncertain future for pedagogy, curriculum and qualifications, in M. Osbourne, M. Houston and N. Toman (Eds.) The Pedagogy of Lifelong Learning. London: Routledge.

Photo by Steve Wheeler

Beneath the facade... by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 25, 2013

EDEN in the Land of the Midnight Sun

I'm very much looking forward to taking part in the EDEN Conference in Oslo between June 12-15 this year. Last year I was unable to attend EDEN, due to pressure of work. This time I made sure I would have the time, by blocking out my diary for the conference week. I'm glad I did. I have been given the honour of being invited to chair the keynote sessions of Sugata Mitra and Sir Ken Robinson, and to moderate a debate between them on the future of education. Sir Ken will be speaking live via satellite from the USA and Sugata will be present at the conference in Norway.

Alongside a great team of people, I will also be live blogging and tweeting throughout the event, and may even get some time to interview some of the delegates, speakers and organisers. Watch this space to read my own blog posts, as well as those of the rest of the EDEN blogging team over the next few weeks in the lead up to the conference, and then during the event itself.

I'm looking forward to seeing Norway again, and to my first visit to Oslo, especially at the time of year when the nights are at their lightest. It's always difficult to sleep in the land of the midnight sun, where in mid summer the night skies are so bright and the sun never seems to fully set. But I intend to spend as much time as possible awake anyway, so I can enjoy the conference and learn as much as I can. If you are going to EDEN this year, I hope we bump into each other. If not, stay in touch through the EDEN conference website, watch the live streaming, and watch this space for news, views and reports.

Photo by Marcus Ramberg on Fotopedia

Land of the midnight sun by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 20, 2013

Learning theories for the digital age

I pointed out recently that many of the older theories of pedagogy were formulated in a pre-digital age. I blogged about some of the new theories that seem appropriate as explanatory frameworks for learning in a digital age. These included heutagogy, which describes a self-determined approach to learning, a new model of peer-to-peer learning known as paragogy, a postmodernist 'rhizomatic' explanation of learning, distributed learning and connectivist theory, and also a short essay on the digital natives/immigrants discourse. I questioned whether the old models are anachronistic.

Is it now time for these new theories to replace the old ones? Do we need them to describe and frame what is currently happening in an age where everyone is as connected as they wish to be, where social media are the new meeting places, and where mobile telephones are pervading every aspect of our lives? Are the old theories still adequate to describe the kinds of learning that we witness today in our hyper-connected world? Do Vygotsky's ZPD theory or Bruner's Scaffolding model still cut the mustard? Or can they work together with the new theories to provide us with a basis for understanding what is happening? How, for example, can we describe learning activities such as blogging, social networking, crowd sourced learning, or user generated content such as Wikipedia and YouTube using older theories? How might we begin to understand the issues surrounding folksonomies, peer learning, or collaborative informal learning that seems to occur spontaneously, outside the classroom, spanning the entire globe - using old theories that were written to describe what happens in a classroom? Sure, I'm being deliberately provocative here, but this is a discussion we need to have: are the old models adequate, or are any of the emerging new theories more apposite, or more fit for purpose?

I finally got around to creating a slideshow that highlights some of the above issues, and features many of the theories I have previously written about. I stated in the presentation that theories are important for at least two reasons: firstly, they enable us to explain what we are seeing from a particular perspective; secondly, they can inform and justify our professional practice as teachers. I suggested that although the new theories are useful, we still need to take transformational learning theories into account, and we need to reconsider some of the social learning theories with which we are already familiar. I created the slideshow below as an accompaniment to an invited webinar I presented for ELESIG, hosted by the University of Nottingham. I will be interested in your views.



Photo by Steve Wheeler

Learning theories for the digital age by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 12, 2013

An audience with...

We learn best when we are fully involved in the process. John Dewey advocated 'learning by doing' and Seymour Papert called it 'learning by making'. These are theories that guide many educators today. Mindful of these theories, I have recently been working alongside students to encourage them to write for an audience. Nothing new in that, you may think. Normally, in higher education, students write for an audience of one. They write essays, projects and dissertations that will be read only by their tutor or marker. What would happen, I wondered, if I gave my students an audience of hundreds or even thousands? I did some early studies into the effects of this when I implemented a programme wide use of wikis in 2006. I published the results of the study in 2009, revealing that when they were aware of an audience, students raised their game. They improved their academic writing skills by concentrating on better sentence construction, grammatical accuracy, critical articulation of theory, accurate referencing and the avoidance of plagiarism.

Earlier, in a 2008 article, 'The Good, the Bad and the Wiki', my colleagues and I had reported that students became very protective of 'their content' in collaborative spaces such as wikis, and took pride in presenting their ideas to a wider public audience. Subsequent implementation of blogging across whole classes revealed that students could find new ways of expressing their knowledge, and that audience dialogue was important for their development of further academic skills such as making arguments, engaging critically with theory and defending their position against attack. Clearly these are all very desirable graduate attributes, and needless to say, wikis and blogs now feature as essential 'learning by doing' tools in many of my undergraduate programmes.

I decided to take this concept a step further. Following the submission of some high quality third year degree projects, I approached students who had been graded at 80 per cent or higher, and encouraged them to develop their assignments for publication. I worked alongside them and we soon had our first success, when one of my BA students, Dan Kennedy, was successful in publishing his work in an online open access journal called The Student Educator. The journal had previously been set up as a showcase for the best student writing at Plymouth University. Dan's piece was a well-written, insightful article on the future of virtual learning environments and is well worth a read.

The next step was to push the idea further and encourage students to present their work in front of large live audiences such as conferences and symposia. The feedback and questions from audiences often add an extra dimension to the learning experience, because they highlight questions and issues the presenter may not previously have considered. I invited Dan to co-present with me at the ALT-C Conference in Manchester in 2009. He presented in front of almost 100 people, by far the largest audience he had spoken in front of at that time. I believe it was a transformational experience for him. It was at that point I decided I needed to find ways of encouraging more students to do similar things. I received some funding from a European project which enabled me to take students on overseas trips to work with our partner university students in Germany, Poland and Ireland. Over the three years of the Atlantis Project, 12 of my B.Ed students took part in presenting at research seminars in Darmstadt, Warsaw and Cork. Subsequently, each of them presented their work at the Plymouth Enhanced Learning Conferences in 2009, 2010 and 2011.

With the encouragement of my colleagues Peter Yeomans, Oliver Quinlan and myself, students also presented at a variety of Teachmeets, both in the South West, and further afield at large events such as the BETT Show in London. At Pelecon 2013 five more students presented their work. Two of those students - Becky Harcombe and Lucy Kitching - are currently working with me to prepare their assignments for submission to peer reviewed academic journals, with me acting as their second author. Lucy has also been successful in having her paper on Games Based Learning accepted for presentation at the EDEN Conference in Oslo, this coming June. I plan to support other students to achieve successful publication and conference presentations in the coming years.

You can imagine what such exposure can do to build students' confidence and how it can raise their professional profiles. Being able to include peer reviewed publications and international conference presentations on your CV when you apply for your first teaching job has to be a real advantage. Being able to evidence critical thinking and academic engagement at the highest level looks impressive on anyone's CV. It is also superb preparation for anyone who is about to embark on a career in education.

Photo by ClintJCL

Creative Commons License
An audience with... by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 10, 2013

Just how far can they go?

Some while ago, I wrote a post entitled 'Lurking and Loafing' about students who are on the periphery of learning, and whose activity is often to 'lurk' without appearing to directly or productively participate. If you are involved in education you will know exactly what I mean. The silent student who sits in the corner, watching, but not overtly involved. Ask them a question and they stare back at you blankly, shrug, or declare that they don't know. It looks as though they really don't want to be there. Students who are on the periphery can also be an annoyance to their peers, especially where collaborative work is required, and they don't appear to pull their weight. In the wider world, this is referred to as social loafing. It occurs especially where there is a large number of people present, and where a diffusion of responsibility is easy to accept.

In my post, I discussed the challenge this presents to teachers, especially where it can be less noticeable in online environments. I particularly highlighted the concerns teachers have about students who don't seem to engage, and often appear to be socially loafing, when other students are working hard. Yet not everyone views it as problematic. Jean Lave and Etienne Wenger argued that some forms of peripheral activity can actually be legitimate participation, and can lead to deeper involvement within the core membership of the community over time. Through such legitimate peripheral participation, they suggest, newcomers hold station in low risk and low profile positions on the edge, while they learn about the tasks, social rules and practices of their community, and eventually are drawn into the centre as productive members. But what if that doesn't happen? What if the students continue to lurk, fail to commit, and offer nothing of real substance, while their peers are working hard? Is this a problem? If so, how can it be resolved?

We know intuitively that people learn best when they really want to. Motivation is essential for the deepest and most engaged learning. Sometimes this motivation comes from outside (extrinsic) but more often than not it is intrinsic, an internal desire to better ourselves, gain more understanding, solve a problem, learn a new skill. The engagement pyramid below has several sources, but in its current presentation, I have added my own perspective around the use of digital media.
The Engagement Pyramid
(Adapted from Altimeter Group)
The pinnacle of engagement is clearly the ability to generate one's own content and then add value to it for others. The power of teaching others as a motivator should not be underestimated, and it is also essential that those who teach really know their field of expertise and have engaged deeply and critically with it. We learn by teaching, and if learners know they have to present something in front of their peers and tutors, they are prompted to prepare well and research widely. Encouraging students to share their content (videos, podcasts, blogs, etc) online for a potential global audience is a sobering but exciting challenge for them. Asking them to curate the content of others and add value to it can be even more challenging, but in doing so, they will usually read more widely, and are then in a position to assimilate multiple perspectives.

Engaging students through social media and mobile technology taps into an area in which many are already knowledgeable. Their familiarity with these tools can often be just the spur they need to engage more deeply in their learning.

Photo by Brian Auer

Creative Commons License
Just how far can they go? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 09, 2013

Self organised learning spaces

"I never teach my students. I only provide them with the conditions in which they can learn." - Albert Einstein

The social web is replete with self-organising spaces. Take Wikipedia for example. It is now the largest single repository of knowledge on the planet and continues to grow, with over 4.2 million articles in English, and many more in other languages. Currently, 750 new pages are added each day on just about every topic known to humanity. It's the first port of call for many web users when they wish to check a fact or statistic. Who creates and maintains this huge, ever expanding repository of knowledge? We do. You and I. Us and an army of like-minded volunteers who love learning, and want to share their knowledge. All Wikipedia has done to promote this vast, ever expanding storehouse of knowledge is to provide the environment within which it all takes place. And that should give all of us some clues as to how to facilitate self-organising learning spaces.

Self organised learning - where learners control their own pace and space of learning, and often decide what content they wish to consume - is a growing force in education. From individual students learning informally by browsing on their handhelds, to small flipped classrooms, to vast groups of learners following a programme of study on massive online open courses (MOOCs), education is changing to become learner driven. Yet many academics and teachers struggle with the concept of self-organised learning. Often this is because it is something of an alien concept to them. When they were in school, college or university, they were probably required to attend lectures and classroom teaching sessions where they were expected to 'receive knowledge' and then go away and attempt to make sense of it in an essay, project or examination. Clearly, the temptation is to perpetuate this kind of didactic approach when one is expected to teach. Many, however, are breaking out of this mould, and are launching into new kinds of pedagogy which enable learners to take control, and where teachers are another resource to be called upon when needed.

Wikipedia facilitates knowledge generation, sharing, remixing and repurposing because it is an open, accessible space where everyone can participate. It may be error ridden, but these errors are usually addressed and content revised, deleted or extended accordingly, and often within a short space of time. Yes, there will be disputes, just as there are 'edit wars' within Wikipedia, but hopefully, learners will also learn from this how to gain confidence in their own abilities, how to defend their positions and how to think critically. If this kind of learning occurs within a psychologically safe environment which is blame free, success can be achieved. Self-organised learning spaces should be similarly founded on psychologically safe principles, where if errors are made, those who made them can learn and adjust as they discover the 'correct approach' or the 'right answer'.

Working within self-organised communities enables a vast amount of learning to take place, but it also allows for individual differences and personalities to flourish. Teachers who adopt the approach of facilitating self organised learning must be willing to allow learners to take their own directions and find their own levels. Exploration, experimentation, taking risks, asking 'what if?' questions and making errors, are all essential elements of self-organised learning. However, probably the most important component is the ability of the learners themselves to direct their own learning, and to be able to call upon the resources they need, when they need them. We can learn a lot from Wikipedia, and not just from the knowledge it contains.

Photo from Fotopedia by William Murphy

Creative Commons License
Self organised learning spaces by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 07, 2013

Twitter and the death of distance

It's something we already know, or at least have suspected for a long time. Social media sites such as Twitter span huge distances to connect people around the world. My own father, now 84 years old, started a Facebook account so he could keep in touch with distant relatives in such places as New Zealand and Australia. He's having a whale of a time. There are many stories of people developing and sustaining friendship, or even romance and eventual marriage, after 'meeting' on a social media site. I have co-authored several books with colleagues whom I have never met, where social media tools were used to co-create the content across the distance. The stories go on and on.

Many of us regularly communicate with multiple Twitter and Facebook friends and acquaintances instantaneously even though they may be in another country. Sometimes those friends can be several time zones away. It doesn't seem to matter that much any more where people are located. It's hard to believe that not so long ago, (pre-internet, pre-World Wide Web), this would have been nigh on impossible. We now take it for granted that we can upload and share photos and videos, text chat in real time, see and hear each other, or play games together in the same online social space, across vast distances.

Distance does not seem to be an issue any more, and research is unearthing evidence for what was already common knowledge. A recent study from the University of Illinois, analysing tweets sent by 70 million Twitter users, found that on average, tweets and retweets were sent by people located more than 750 miles away from the message originators. This study, published in the open access online journal First Monday, is intriguingly titled: Mapping the Global Twitter Heartbeat: The Geography of Twitter. Here is the abstract:

In just under seven years, Twitter has grown to count nearly three percent of the entire global population among its active users who have sent more than 170 billion 140–character messages. Today the service plays such a significant role in American culture that the Library of Congress has assembled a permanent archive of the site back to its first tweet, updated daily. With its open API, Twitter has become one of the most popular data sources for social research, yet the majority of the literature has focused on it as a text or network graph source, with only limited efforts to date focusing exclusively on the geography of Twitter, assessing the various sources of geographic information on the service and their accuracy. More than three percent of all tweets are found to have native location information available, while a naive geocoder based on a simple major cities gazetteer and relying on the user–provided Location and Profile fields is able to geolocate more than a third of all tweets with high accuracy when measured against the GPS–based baseline. Geographic proximity is found to play a minimal role both in who users communicate with and what they communicate about, providing evidence that social media is shifting the communicative landscape.

One of the key findings of the research is that Twitter is transforming our conceptions of communication at a global level. The study also confirms at least two other things: Not only are we now a virtual, distributed society, we are also increasingly comfortable with the fact that content, especially knowledge, can be disseminated around the world, via huge networks of users, in seconds. I suspect that the 'ripple effect', where content is spread and amplified through sub-groups across networks, is only just beginning to gather pace and will continue to grow exponentially as more and more people start social media accounts, and then begin to connect with others across the globe. What this will do for massive online courses and other forms of distance education remains to be seen.

Photo by Artemis Crow

Creative Commons License
Twitter and the death of distance by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 06, 2013

Follow you, follow me

A recent longitudinal study may provide clues about why some have more Twitter followers than others. The study, entitled A Longitudinal Study of Follow Predictors on Twitter analysed the social behaviour and message content against follower growth for more than 500 Twitter users over a 15 month period. The research concludes that if you want to attract more followers, your content has to be good quality, and how you say it also matters.

Here's a breakdown of the three main findings: Firstly, message content significantly impacts audience growth. It was found that negative sentiments (comments and content) were less likely to attract more followers than positive sentiment (this result is probably a 'no brainer', but it's useful to see the statistics). The authors speculated that this was probably because Twitter exhibits weaker social ties than other social networks such as Facebook, and therefore many Twitter users are less likely to want to connect with relative strangers who transmit negativity.

Secondly, social behaviour choices can dramatically affect network growth. Who a user follows and the profile cues they make available (Twitter identity, personal details disclosed and avatar) could increase or decrease their social capital. If you stay an egg all your Twitter life, don't expect too many followers.

Thirdly, variables related to network structure are useful predictors of audience growth. Connecting ties must exist between users and their audience, so if you want others to follow you, you may need to follow a few back to keep their interest. However, this finding should not be privileged above the first two findings, the authors say.

One area the authors also briefly discuss, as an exception to the above findings, is the celebrity effect. Many people will follow their favourite celebrities regardless of the content that is presented. It seems that all it takes to gain a huge following is to be a popular film star, rock musician or author. For the rest of us, content, connection and presentation style are everything, it seems.

The study is worth reading if you have any interest in how Twitter works at a more dynamic, macro level.

Image by AJ Cann

Creative Commons License
Follow you, follow me by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


May 02, 2013

Are you a meerkat or an ostrich?

Are you a meerkat or an ostrich? Why am I asking you this strange question? Read on...

Etienne Wenger recently declared: 'If any institutions are going to help learners with the real challenges they face...(they) will have to shift their focus from imparting curriculum to supporting the negotiation of productive identities through landscapes of practice' (Wenger, 2010).

We live in uncertain times, where we cannot be sure how the economy is going to perform today, let alone predict what kind of jobs there will be for students when they graduate in a few years' time. How can we prepare students for a world of work that doesn't yet exist? How can we help learners to ready themselves for employment that is shifting like sand, where many of the jobs they will apply for when they leave university probably haven't been invented yet? It's a conundrum many faculty and lecturers are wrestling with, and one which many others are ignoring in the hope that the problem will simply go away. Whether we are meerkats, looking out and anticipating the challenges, or ostriches burying our heads in the sand, the challenge remains, and it is growing stronger.

Wenger may have given us clues to what we should do. Stop emphasising the teaching of curriculum subjects, and spend less time transmitting knowledge, facts and structured content that can quickly go out of date. It means breaking down the traditional silos of division and opening up classrooms and lecture halls to other possibilities beyond passive reception of content. It requires that we begin to break down the false boundaries between subjects, developing lifelong learners who will be able to adapt quickly and flexibly to changing contexts, unfamiliar problems and new challenges as they arise. This means creating environments in which students can learn to problem solve, negotiate meaning, develop their digital identities, and practise new communication methods through a variety of different platforms and media. It means exposing them to experiences where they can practise creating and sharing their own content, remixing existing content, reflecting on their practice, thinking and arguing critically.

All of these are skills and competencies graduates will need if they are to face a brave new world where nothing is yet clearly defined and where everything is up for negotiation. Such flexible, learner centred activities will be key to meeting any possible number of futures that may be out there. MOOCs and flipped classrooms are just the start of the movement to create this shift in education. They will not be the only methods employed. We can only begin to guess at what will happen next as education begins to evolve to its next level. Will you be looking out to see what is on the horizon, or will you hide your head in the sand?

Reference
Wenger, E. (2010) Knowledgeability in Landscape Practice, in S. de Freitas and J. Jameson (eds.) The eLearning Reader.

Photo by Ray Morris

Creative Commons License
Meerkats and ostriches by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


April 30, 2013

Digital me, digital you

Increasingly, many of us are spending more of our time online, creating, repurposing and sharing content, searching for and consuming content, and communicating with others. All of these activities leave behind a trail, a digital footprint, a record of where we have been and what we have done. More significantly, in psychological terms, we are developing our personal digital presences, and modifying our digital profiles. These are some of the essential elements that constitute an individual's digital identity - who we are in a variety of contexts in digital environments - how we present ourselves and manage our impressions in our digital lives. A useful model that can be applied as a framework to aid our understanding of the interaction between individuals, tools and technologies, other people and the wider learning ecosystem, is the model developed by Engestrom and his colleagues (building on the work of Leont'ev, Rubinstein and other social constructivist theorists), which we now know as Activity Theory. My version of the model, which I have used to describe the essential elements and actions that help to build a digital identity, is shown in the image above, overlaid against the original model.

I thought it useful to apply some statements by leading theorists to a few of the pathways/relationships within the model. For example, Marshall McLuhan made specific reference to the relationship between people and technology when he declared 'we shape our tools, and thereafter, our tools shape us.' The symbolic interactionist theorist Charles Cooley saw the impact of community upon the behaviour of individuals when he wrote 'We see ourselves reflected in the eyes of others'. Clearly, digital identity is a complex proposition to talk about. The relationships between the elements in the Activity model are not as clear cut as the diagram might make us believe. The slideshow below, which was presented today at a research seminar at the University of Reading, might shed some more light on the question of how we formulate, maintain and modify our digital identities, but there is much research still to do before we can better understand who we really are when we venture online.



Creative Commons License
Digital me, digital you by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


April 26, 2013

Turning over a new leaf

When the Chartered Institute of Personnel and Development invited me to speak at their annual HRD event at London's Olympia I was delighted. I wasn't so keen when they asked me to supply my slides many weeks in advance. The organisers wanted them so they could produce delegate packs that included paper versions of my slide show. I could understand their eagerness, but I hesitated. I don't normally send my slides too far in advance of a presentation, for three reasons. Firstly, I don't think it's a good use of resources to produce paper-based slides. Secondly, it's not good pedagogical practice, because looking down at a paper rendition of slides you are about to see can be distracting for delegates when they should be engaging with the front-of-house presentation. It's even more confusing if the slides don't match the presentation.

Which brings me to my third point - my slide decks change almost by the day. Learning technology is a fast moving topic. It changes rapidly, and there are always new things that can be added. So I sent a provisional set of slides to the organisers about two weeks before the event, with the warning that these would not be the final version. I duly arrived on the day with a drastically updated set, as I had anticipated I would. Even in the middle of my presentation (during a brief break in the workshop) I was still tinkering, adding an extra few slides I had seen on Jane Hart's blog that morning that were very relevant to my presentation (such as the chart above). She talks about the five key ways knowledge workers like to learn today. You can see the slide set below, with Jane's research report included in the middle, and I would also encourage you to go to her blog to read the full report.



The message I definitely had to include from Jane's work based research study is that when implementing elearning in any company, one of the most important things to avoid is simply 'shovelling across content' from traditional classroom based learning into a digital medium. Electronic page-turning, she argues, just isn't enough, and of course, she is right. And yet the practice still persists, either because managers consider it to be cost effective, or because they don't know that there are better, more effective ways to present digital learning. Page turning approaches may be cheap, but they are actually a false economy, because they simply turn employees off. The result is that employees fail to learn what the company has paid for them to learn. This kind of thinking was voiced during the session. Someone mentioned that they thought elearning would be attractive to many companies because it would 'save money'. The ensuing discussion quickly demolished this notion - it is rare indeed that a company actually saves money by implementing elearning, and surprisingly, cost saving should not be a consideration when elearning is being implemented. More important reasons for implementing elearning are that it provides learning opportunities for employees who would otherwise not have a chance to learn, and it offers flexibility of pace and style. As Jane Hart argues in her report, one of the things most knowledge workers desire is to be able to learn flexibly, whilst remaining within the flow of their work, and preferably without leaving the work space to do so.

We had a good time during the session at Olympia. I guess though that some of the delegates were a little confused that the slides they had in their pack were not identical to the ones they saw on the screen. They were engaged in their own version of page turning, and perhaps some of them benefited, because I saw them scribbling notes on the slide images. Yet there are much better ways of presenting learning than simply sequencing content in a linear manner. I am sure that most were more engaged with the discussion we enjoyed during the session than they were with the linear content provided as slide handouts. The same principle applies to elearning. I think it's about time organisations and managers began to wake up and realise that digital learning is different to traditional learning. It's about time they all turned over a new leaf.

Image courtesy of Jane Hart

Creative Commons License
Turning over a new leaf by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


April 23, 2013

Freedom to imagine

Sir Ken Robinson has a lot to say about creativity and learning. The two are, or should be, inextricably linked. One of his remarks is that imagination needs to emerge as creativity, as a natural process. He goes on to argue that traditional school systems constrain or even negate this process. He argues that this is largely due to the mechanistic, industrialised approach schools have taken for many years. Other constraints are logistical problems, such as the lack of time or space for play, exploration and discovery that is familiar in many schools. All children have great imaginative power, but gradually this ability to imagine can be eroded as they are processed through formal education systems. In short, Robinson believes school is killing creativity. But this may all be about to change. The teacher led nature of traditional education is being challenged, not only ideologically, but also as a result of the pervasiveness of new technologies.

One question that is often asked within this discourse, is whether technology can actually improve education by providing learners with opportunities to be creative. 

For me, the answer is yes, in certain circumstances. 

Give a child a games console and he will play a game on it. He will have great fun, but will he learn anything significant? Will he be creative? It depends of course on what the game is, whether it is linked to authentic learning, and what specialised support is on offer from trained educators. It also depends on whether he feels he is in an environment where he can take risks, and express himself freely. The same applies to any technology within any formal learning context. In informal contexts, children are very expressive and creative through their technology.

For formalised learning, students require scaffolding, but the scaffolding does not necessarily have to take the form of a 'knowledgeable other person' as Vygotsky suggested. Today, technology, particularly technology that is personal and portable, can provide similar forms of scaffolding for learning. Increasingly, teachers are adopting roles as support for learning, and as facilitators of learning spaces. For creativity to be maximised, learners need to be free to imagine, discover, explore and play in spaces where they are psychologically safe. If they make mistakes, they will be able to learn from these, rather than being punished for 'getting it wrong'. 

Give a child a camera and she will be creative.... especially if she knows what she is aiming at.

Photo by Steve Wheeler

Creative Commons License
Freedom to imagine by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

