
February 09, 2013

Being Negroponte

'Learning when there is no school'
In 1995 I read a little black paperback book that changed my view of the world. The title of the book was 'Being Digital' and the author was Nicholas Negroponte. Several key elements of Negroponte's book stood out for me and challenged my thinking. Firstly, he talks of a time when all media will be transformed from atoms into bits. This premise, written in the middle of the 90s, looked forward to a time when newspapers, movies, music, television, photography, and a host of other media would reside exclusively within the digital domain. The repercussions would be that large businesses that relied on shipping 'atoms' would go out of business, whilst those that sent bits would thrive. Negroponte is a gentleman and doesn't have the hubris to declare 'I told you so', but a quick look around the world of business will tell you that he was right. Large photographic companies, the music industry, book and newspaper publishers, high street chain stores and even the mighty Hollywood film industry are struggling to adapt, survive or maintain their preeminence in a world where everyone has a mobile phone with a camera, downloads of e-books exceed print-based sales, iTunes is the favourite method of purchasing music, movies can be streamed online, and people are migrating en masse to online stores such as Amazon. Negroponte's vision was prescient indeed, and we ignore the man's ideas at our peril.

Secondly, Being Digital featured further predictions about touch screen computers, artificial intelligence and convergent technologies such as TVs and computers combining their functionality. The entire book is crammed full of such predictions, and it is not hard to see why it had such a huge impact on me and many others like me almost 20 years ago.

It was a delight and a privilege to be invited to meet Nicholas Negroponte over dinner in the run-up to the Learning Technologies Conference. I sat and chatted with him for more than two hours as he regaled me and my co-diners with story after story of his many exploits. Negroponte established the now legendary MIT Media Lab, and was also a co-founder of Wired Magazine. I first became aware of his work by reading his then regular column. He is well connected too. Close friend and LOGO inventor Seymour Papert married author and cyberspace researcher Sherry Turkle in the living room of Negroponte's home. Negroponte and his then wife met with Alan Turing's mother and brother, and were given all his 'baby photographs'. He worked alongside legends such as artificial intelligence pioneer Marvin Minsky and in so doing, became something of a legend himself. In his opening keynote speech at Learning Technologies, Negroponte stalked across the stage reminding his audience that it is a big mistake to assume that knowing is synonymous with learning. 'We know that a vast recall of facts is not a measure of understanding,' he declared, 'and yet we subject kids in school to constant memorising to pass tests.' His answer? What we need to do in schools, he said, is to find ways to measure curiosity, creativity, imagination and passion, as well as the ability to view things from multiple perspectives.

Negroponte is now celebrated for his high impact initiative to provide children in poor countries with access to learning through laptop computers. His One Laptop Per Child (OLPC) project has now given children from Ramallah to Rio access to the learning they previously never had a hope of having. The total number of laptop computers distributed through the OLPC project now exceeds 2.5 million in 40 countries, and there are many heartwarming stories to be told. Children are now teaching their own parents how to read, using the laptops as tools. In Ethiopia, over 5000 children are learning to write computer programs using Squeak. Plans to begin distribution of touch screen tablets are well underway, and it won't be long before we are talking about One Tablet Per Child. All of this is run on a charity basis, and is philanthropic to the core, with supporters including the Bill & Melinda Gates Foundation and Salman Khan's Khan Academy.

If we have learnt one thing from the OLPC project, says Negroponte, it is that children learn a great deal on their own, with little or no help from others. This echoes the work of pioneers such as Sugata Mitra, whose 'minimally invasive education' was demonstrated by the 'Hole in the Wall' experiments. Negroponte said that Mitra is now working with him and others at MIT - they have joined forces to advance these projects further. Children have a natural curiosity, Negroponte is at pains to point out, and discovering, making and sharing things is second nature to them. We should nurture these characteristics, he warns, rather than stifling them in rigid school systems.

Photo by Steve Wheeler

Creative Commons License
Being Negroponte by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


Three things

There are three things we need to know about learning for this generation. The first is that learning needs to be personalised. As I argued in a previous post, learning must be differentiated, because one size does not fit all, and standardised curricula and testing are not fit for purpose in the 21st Century. Personal learning is unique to each learner. The tools and devices students choose, and the pathways they decide to take, are in many ways beginning to challenge the synchronised and homogenised approaches we still practise in schools, universities and organisations.

Secondly, learning needs to be social. Much of what we learn comes from contact and communication with others. Increasingly, such contact and communication is mediated through technology, and social media tools are ideal for this purpose. The celebrated Russian psychologist Lev Vygotsky proposed the idea of learning being extended when children are mentored by a knowledgeable other person. His Zone of Proximal Development theory has been central to our understanding of how we learn in social contexts. Yet in recent years, with the proliferation and equalisation of knowledge and the strengthening of social connections through digital media, new theories such as connectivism and paragogy have emerged to challenge the central place of ZPD in contemporary pedagogical theory. We need to ask whether we still need knowledgeable others such as subject experts to help us extend our learning when we have all knowledge at our fingertips. Now many learners are exploiting the power of social media to build personal learning networks and to engage with their equals within them.

Thirdly, learning needs to be globalised. As we develop personal expertise, and begin to practise it in applied contexts, we need to connect with global communities. Students who share their content online can reach a worldwide audience who can act as a peer network to provide constructive feedback. Teachers can crowd-source their ideas and share their content in professional forums and global learning collectives, or harness the power of social media to access thought leaders in their particular field of expertise. Scholars who are not connected into the global community are increasingly isolated and will in time be left behind as the world of education advances ever onward.

Photo by Steve Wheeler

Creative Commons License
Three things by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


February 08, 2013

Changing the world

It's not often you get to talk with someone who has changed the world. That's exactly what I did this week in a glittering lounge in the Ritz-Carlton Hotel, when I sat down with Steve Wozniak, co-founder (with Steve Jobs) of Apple. Wozniak designed the first Apple computer, and together with Jobs, set in motion a company that continues to this day to mould our use of digital technology. If you use an iPad, iPod or iPhone, if you have an Apple Mac computer or laptop of any sort, you undoubtedly have Steve Wozniak to thank. Apple, and its co-founder Wozniak, have shaped our desires and crystallised our dreams with innovation after innovation. Steve Jobs may be no longer with us, but Steve Wozniak - 'Woz' - lives on, larger than life, and as effusive and buoyant as ever about the future of technology and its role in education.

This week, Woz and I were both invited speakers at the 3rd International Conference on eLearning and Distance Education in Riyadh, Saudi Arabia. He was already sitting in the speaker's lounge, ready to present his opening keynote, when I wandered in, unaware that he was there. There was no-one else in the room. I walked over. We shook hands. We sat down. Then we talked.

The world according to Woz is one of sustained wonder at the many ways technology can be made to do our bidding. As a young boy growing up in the 50s and 60s, he told his father that he would one day own a computer. His father laughed and told him a computer would cost more than a house to buy. Computers in the 50s and 60s were indeed expensive. They were also almost the size of houses. But Woz's dream of one day owning a computer was realised when he began work for Hewlett-Packard. Within a short time he was taking computers apart to see how they worked, and had soon drawn up the plans to construct his very own computer - the Apple I. He met Steve Jobs, who said 'we can sell this', and the rest, as they say, is history.

Now aged 62, and with a lifetime of achievements behind him, Woz has a great deal to say about schools and education. He even became a school teacher for a few years after he had made his fortune and had put Apple behind him. He believes that computers and digital technology are now our prime scientific and academic tools, but balances this with the view that regardless of the impact of technology on society, we still need rich personal and social interaction for effective education to take place. Hence, he says, teachers will always be needed. He is keen to reinforce the idea that children learn best when they are interested. When you have the desire to learn, he says, no-one can take that away from you. And yet, he argues, school is the one environment that currently teaches children that taking a test determines how 'intelligent' they are, but cramming for that test is certainly not learning. He asks, are schools sending out the wrong message to children when we ask them to study for test after test? Children are born curious, he says, and all of us - teachers, parents, society - should keep it that way.

On computers and design, Woz is adamant - he is only interested in designing devices that are interactive. 'They need to respond when I use them', he said, 'otherwise I lose interest'. On the nature of knowledge, he told me, all of us need to gain some 'fact' based knowledge, but that this is only the starting point, as we gain skills that will enable each of us to take our place in society. The man is insightful, inspirational and iconic. Yes, it's not often you get to speak to someone who has actually changed the world.

NB: The above content is taken from my conversation with Steve Wozniak, and also excerpts from his Keynote speech in Riyadh on February 5, 2013.

Photo image courtesy of Steve Wheeler

Creative Commons License
Changing the world by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


February 06, 2013

You can't walk where I walk

Someone once told me that life is like a fast moving stream. You can put your foot into it, and even let it flow over you for a while, but you can never put your foot into the same river twice. That's quite profound, but there is something even more profound. It is this: You can't walk where I walk. In other words, you can't experience what I experience. We may be sat watching the same movie or TV programme. We may read the same book, participate in the same conversation, or sit in the same lecture. But your experience will be different to my experience. We may come away with similar messages or impressions of what we have observed or experienced, but because we are unique individuals, we are by nature different to each other, and our perceptions will also be different. That is one very important reason why in schools, standardised testing, homogenised curricula and batch processing by age need to be changed for more personalised approaches to education.

It's all down to individual perception - what psychologists call the 'representation of reality'. My reality is slightly different to yours, and yours to mine. It has little to do with you and me viewing the same thing from slightly different angles, although sometimes that can be a factor in creating different perceptions. No, it's not about different angles, it's about different perspectives. A number of variables cause each of us to view life uniquely, and to represent reality from different perspectives, including our age, gender, culture, background, health, preferences and personal beliefs - in fact just about everything that wires our brains uniquely and makes us individuals. When teachers attempt to differentiate learning, they generally focus on aptitude and ability or, in some cases, whether a student has a disability. Some teachers are sidetracked into considering 'learning styles' but that is a big mistake, as I have previously discussed. Carl Rogers advocated 'unconditional positive regard', a philosophy that plays out when every student is considered to be of equal worth in the classroom, regardless of their previous 'form'.

What teachers should be focused upon is the whole child, and how they perceive life and represent reality differently to everyone else in the room. Differentiation should encourage diversity, not simply make provision for it. It should celebrate the fact that we are all different, and include every single voice in the classroom, giving each an equal weight. That's hard to achieve, but with some forethought and practice, and a great deal of patience, teachers can encourage each student to participate fully and play to their individual strengths. We are not that different from each other really. We all have the same needs: to be respected, to feel we belong to the group and to have a voice. Each of us is the same, but in uniquely different ways. If you can understand that, then you will understand why you can't walk where I walk.

Photo by Steve Wheeler

Creative Commons License
You can't walk where I walk by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 27, 2013

Game changers in the Training Zone

This week, ahead of my speech at the Learning Technologies Conference I recorded a 10 minute podcast interview for the Training Zone. You can listen by clicking on the embedded link below. If the link below doesn't work for you, try this one. My interview is at 18.45 in the podcast. To give you a taste of what was discussed, here is some of what I said in an excerpt from the transcript:

Q: What are the big technological developments we can expect to see implemented in 2013?

Steve: I think there are several that we have to look at as changing practice. I'm talking about disruptive technologies, things that will change probably forever - irrevocably - what we do in the workplace and in learning in particular. So for instance, one of the big developments I'm seeing happening right now is the move from keyboards and mice to touch screens and maybe even non-touch technologies.

One of the examples I've seen recently at the CES - the Consumer Electronics Show in Las Vegas - it was reported that there was a new touch screen device which goes 'lumpy' when you want to put a keyboard up on it. The keyboard actually appears but it's through crystallisation within the screen. The keys are actually surrounded by raised areas so that people with visual impairment for instance can use the touch screen tablet. So there are really practical developments coming out which I think are going to improve working conditions for lots of people with visual impairment.

But I think for all of us touch screen technology is already revolutionising the way we do things. Some people say that you will never see the death of the computer keyboard, but I'm not so sure. I think that in a few years' time maybe our grandchildren are going to sit on our knees and say 'did you really have to touch a computer to make it work?' So I think touch screens and non-touch technologies, things like the Xbox 360 Kinect, technology with a depth camera and an infrared camera, I think is going to change forever the way we interact with technology. We are going very quickly towards the Tom Cruise, Minority Report style of data manipulation.

I think another big development is going to be larger screens, flatter screens, in fact screens that are flexible. Screens that you can stick onto any surface so that you can make the whole of the wall of your office or your workplace into a television screen which doubles up as a computer screen and for data manipulation. And I think this is coming, I think it is going to be quicker than we think as well, these are some of the developments we are looking at.

I think ultimately, the biggest game changer which has been going on for some time now, is mobile learning. Using your own personal devices to access learning, access peer groups, access social networking, access the ability to create and share content, anywhere and everywhere. As we're talking I'm watching citizen journalism going on, on the television in front of me. This London helicopter crash that has happened. Most of the pictures the BBC are actually presenting at the moment are from people who were on the scene at the time with their mobile devices. I think we're going to see that impact a whole lot more. Those are some of the trends I see happening.  


Creative Commons License
Game changers in the Training Zone by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 21, 2013

Telling your story

Blogs are not simply about text. They can also encompass hyperlinks, sounds, videos, and images. Blogging is also about telling your story. Today I was involved in teaching a session for a BA group on the use of digital photography and communication. Specifically, the session focused on images as narrative, and all of the group managed to create some impressive and in some cases stunning image sequences. I used images from a trip with my students to the Gambia in 2009 to present my own example of a narrative at the beginning of the session. I thought I would share it with you here on my blog. I hope you find it interesting.


This image is of a man looking out over the sea, in a coastal village in the Gambia. Poverty is commonplace here, given that the Gambia is one of the poorest countries in Africa. One of the few jobs most young Gambian boys can do is fishing. It's a dangerous, low paid job, and this image depicts some of the boats they use to launch themselves out to sea.


This image is of children collecting firewood for the compound cooking fires. There is no electricity or gas in most parts of the Gambia, so open fires are the most common means of cooking. Children also fetch water, sometimes several kilometres from their villages, and because of the necessity for this work, they often miss school. As a visiting group, my colleagues and I, along with our students, saw the need and raised money for a new well to be sunk in the village. The children no longer have to walk four kilometres each time they need to fetch a pail of water. Now they can go to school.


I took this image of a young girl sat in a village compound. I couldn't resist capturing the photo, because it was so iconic and representative of the children in this part of the world, and it conveyed innocence and hope. I showed her the image on my digital camera, and she was shocked but delighted. She clearly recognised herself, but I don't think she had seen a camera before, and she had probably never seen an image of herself other than her own reflection.


I decided to use a reworked version of the picture of the young girl in a blog post called 'What Price Education?' to hammer home the message that every child deserves a good education. In the Gambia, children can only go to school until they are 11 years old, because the state only funds primary education, and it's very basic. There are few secondary schools, and children can only attend them if their parents can afford the fees. Very few can. As a result, Gambian children are some of the most disadvantaged children in the world. I couldn't think of a better way to use the image than in a manipulated front cover of the National Geographic magazine. It was very easy to do. Using PowerPoint, I created a yellow background, and a smaller blue background for the frame, and then placed the image above. Finally, I chose appropriate coloured font styles to mimic the familiar National Geographic livery. I saved the image as a .jpeg file and then uploaded it to the blog like any other image. I hope you like the images and that in some way, they speak to you.
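For readers who prefer to script this kind of layered composite rather than build it by hand, the same idea (yellow background, smaller blue frame, the photograph, then masthead text on top) can be sketched in Python with the Pillow imaging library. This is a hypothetical equivalent, not the PowerPoint method described above; the colours, sizes and filename are all illustrative assumptions, and a grey rectangle stands in for the actual photograph.

```python
from PIL import Image, ImageDraw

# Layer 1: the yellow background (colour and canvas size are illustrative).
cover = Image.new("RGB", (600, 800), "#FFCC00")
draw = ImageDraw.Draw(cover)

# Layer 2: a smaller blue rectangle acting as the frame.
draw.rectangle([30, 30, 570, 770], fill="#1A3A6B")

# Layer 3: the photograph - here a plain grey placeholder image.
photo = Image.new("RGB", (500, 620), "#888888")
cover.paste(photo, (50, 130))

# Layer 4: masthead text mimicking the magazine livery (default font).
draw.text((50, 50), "NATIONAL GEOGRAPHIC", fill="white")

# Save as a .jpeg, ready to upload like any other image.
cover.save("mock_cover.jpg", "JPEG")
```

The layering order matters: each element is drawn or pasted over the previous ones, just as the shapes were stacked in the PowerPoint version.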

 Photos by Steve Wheeler

Creative Commons License
Telling your story by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 19, 2013

We need a rethink

There's a very useful and refreshing article by Tom Barrett in this week's TES Magazine entitled 'Education needs to plug into Web 2.0'. Never before have I read an article that I agree with so completely. Those of us who are immersed in a world where the use of social media is so sustained, embedded and familiar forget that many schools still ban the use of Web 2.0 type tools in their classrooms. Tom has some advice for schools that fall into this category, and I quote:

"Perhaps one of the biggest barriers to engaging with the social web in schools is the perceived issue of safety: many teachers say they are left feeling helpless when pupils' work is available on the World Wide Web. I have been blogging with classes for eight years and these common-sense guidelines always work:

1) Be open to parents and allow them to share any concerns.
2) Moderate all comments before they are posted online.
3) Have a clear and robust e-safety policy.
4) Work within the school policy on images of children on blogs.
5) Publish a set of blogging guidelines on your site and share them with parents.
6) Make sure the whole school community is aware of your work."

Common sense indeed, but I would also add that schools should encourage and permit children to help teachers co-create the e-safety and school policies on social media use. They use these tools outside of the school on a daily basis and often have a sophisticated grasp on how social media work. Who better to inform schools than the users themselves?

I once spoke at an event where a school leader remarked that his school had banned access to blogging, YouTube and all other social media because 'they are dangerous'. I countered by asking him whether we should also stop teaching children how to cross the road, because traffic is dangerous too. I think he got the message. Where better to teach children about the dangers and risks of using the Internet than in school? I think a rethink is very much overdue.

Whether this blog post, Tom's article, or any number of other good pieces of advice will have an impact on the impasse many schools find themselves in over social media use remains to be seen. But just a few moments thinking about the risks (and balancing them against the clear benefits social media bring to schools that do allow them) should convince most school leaders that adopting social media in the classroom really is the best way forward.

Image source

Creative Commons License
We need a rethink by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 15, 2013

Pelecon flies higher

Those of you who have ever attended a Plymouth Enhanced Learning Conference, or even followed from afar via the social media channels, will know that Pelecon is an extraordinary event. Since attaining international conference status and extending its programme to 3 days, Pelecon has become one of the must-attend European learning technology conferences on the calendar. The event attracts learning technologists, lecturers, researchers, teachers, learning professionals, health and medical staff, private trainers, and just about anyone who is interested in the very latest in digital pedagogies, literacies and technologies. In previous years delegates have enjoyed listening to high profile and diverse keynote speakers such as Stephen Heppell, Keri Facer, Gilly Salmon, Graham Attwell, Shelly Terrell, Jane Hart, Josie Fraser and Alec Couros.

This year the conference takes place from 10 to 12 April. For 2013, we have lined up a veritable feast of leading speakers, all of whom are featured on the Pelecon conference website, and this year promises to push the boundaries even further than before.

#pelc13 is set in the delightful South West coastal English city of Plymouth. The surrounding Devon countryside is stunning as it unfolds in springtime, the towns and villages are steeped in history (the Mayflower Steps and Plymouth Hoe are a must for all tourists to visit) and the culture is rich. The Conference social events, including a Wednesday evening Teachmeet, are guaranteed to be fun, entertaining and engaging.

This year the Pelecon Conference dinner will return once again to the visually stunning surroundings of the National Marine Aquarium. Located in Sutton Harbour in the historic Plymouth Barbican area, the Aquarium is the largest in the UK, and is one of the premier tourist attractions in the South West of England. Delegates who enjoyed the conference dinner at previous Plymouth eLearning Conference events in 2009 and 2010 were unanimous in expressing their praise for the evening.
The Dinner starts on Thursday evening, April 11th, at 7.30 pm with welcome drinks and an exclusive tour of the entire aquarium by official guides. Delegates can watch as the sun sets over Plymouth while fishing boats and other marine vessels arrive and depart from nearby Sutton Harbour. The three course dinner will be served in the Atlantic Reef area of the Aquarium, where diners can watch the sharks and other large fish swimming in one of the largest glass tanks in Europe, whilst they enjoy their meals. The company will be great, the food will be excellent, and the live music will be splendid. The price for the evening isn't bad either - at only £40.00 per head. The bar will be open until 11 pm, and then afterwards, the nearby Barbican and Coxside watering holes will be sure to offer a warm welcome to any delegates who wish to linger to explore Plymouth's nightlife a little more. The Conference Dinner has only one drawback - there will only be about 110 dining places available at the Aquarium, so please book your place for this exclusive event soon to avoid disappointment!

The Pelecon Conference organising committee look forward to joining with delegates at the Conference Dinner at the National Marine Aquarium. We hope you can attend the conference.

Photo by Jose Luis Garcia
Creative Commons License
Pelecon flies higher by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 14, 2013

Touch and go

This is part 9 in the series on the future of learning and technology. Everything, it seems, is being disrupted. By this I mean that new technology is arriving all the time, and much of it is changing forever the way we do things, the way we think about things, and the way we use things. The reason technology has the capacity to be so disruptive is that it moves more quickly than industry, business, education, health and entertainment; just about every part of the society we live in is constantly struggling to keep pace with it. As Larry Downes wrote recently: 'Social, political and economic systems change incrementally, but technology changes exponentially.'

In these exponential times, we can expect new technologies to emerge with increased regularity, and we can expect more and more to find ourselves scratching our heads, figuring out how we are going to harness their strange power and potential within familiar situations. Disruption occurs when we introduce new technology that impacts so dramatically upon previously familiar practices that it changes them irrevocably. Stephen Heppell remarked that technology radically changes everything it touches. We will never return to the days of linear tape machines. Audio and video tape were replaced by digital media. The physical presence of music media is being rapidly eroded by digital media. The vinyl discs I used to buy as a teenager are already curios, and CDs will also become collectors' items as they slowly begin to disappear from our high street shops. High street shopping itself is in very real danger of disappearing too, as online stores strengthen their grip on an entire generation of consumers. Photography, telephony and telecommunications, travel, leisure, commerce, news gathering, marketing, movie making - the list of industries that have been forever disrupted by technology goes on. Digital skills are at a premium. If you don't possess the skills to use a computer or other digital device, you automatically exclude yourself from the majority of jobs currently available.

In a recent blog post, I wrote about the Internet of Things - a world where every object is connected to the Web. I wrote that 'Once upon a time, objects were simply objects. They only came alive in Disney cartoons'. Now, the announcement of a new technology called Touché has the potential to change forever the way we interact with everyday objects. And ironically, it has emerged from work at Disney Research. Touché uses a Swept Frequency Capacitive Sensing technique to make just about any everyday object 'aware' that users are touching it. From door handles to sofas, once connected, objects will be context aware, and respond to our natural gestures. The developers claim that using the technology may ultimately render keyboards and other peripherals completely redundant. Touché can detect whether humans are present or absent, and a variety of multi-touch gestures can be programmed to be recognised. Watch the video below to see for yourself the full potential and fascinating implications of this technology. Ask yourself how it could be applied in your own area of work. And then prepare yourself for disruption ...



Creative Commons License
Touch and go by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 11, 2013

The foresight saga

Vuzix M100 Smart Glasses
This is part 8 in the series on the future of learning and technology. At the start of each year everyone, it seems, goes into the prediction business. The first week of 2013 saw many articles appearing on what we can expect to see this year. A large number of the articles were about new technology trends, and there was much speculation about how certain technologies might transform our mundane little lives. With the massive Consumer Electronics Show (CES 2013) opening its doors last week in Las Vegas, technology news was making prime time TV all across the globe too. The stars of CES 2013 were the Vuzix M100 Augmented Reality Smart Glasses (pictured), Samsung's new ultra thin bendy phone screen and the 4K ultra-high-resolution television screen. These are not future technologies. They are technologies for today, 2013. 4K resolution is not enough it seems. Already there are articles predicting beyond 4K into the exotic TV world of the future where transparent televisions (what the...?), and even 'choose your own size' projected wall TVs will roam majestically across the prairies. Entertainment will literally go to the wall.

But what of the future? What are the tech-gurus saying we should look out for this year? The BBC's New Year's Eve article 'Who will call it right in 2013?' seemed to hold a competition amongst the illuminated ones, the technology soothsayers of our age. Peering into their digital chicken guts, each gave it their best shot (without sticking their necks out too far, thus avoiding any potential damage to their stellar reputations), predicting what we can all expect to bump into as we turn that chronological corner. The article should perhaps be re-titled 'Who will call it at all in 2013?' because many of the so-called 'predictions' were banal to say the least.

Robert Scoble (the celebrated blogger) stayed safe and on piste, predicting that 2013 would be contextual. He talked of heads-up displays (Google Glasses and the Vuzix M100 are already gearing up for mainstream release) that we could use when we all go skiing (yes, we can all afford alpine holidays in today's burgeoning economy. I'm just nipping off to Gstaad), to brag to your friends through the gift of video evidence just how high you climbed before you fell drunk from the ski-lift, and how long your 'hang' time was on your latest jump. That's if you have any friends left. How's that for context?

Dave Coplin, chief envisioning officer at Microsoft (every company should have one), was even safer in his predictions, suggesting that 2013 will be about mobiles, data and trust. More and more, he suggests, data are (he says is) going to be the lifeblood of all our activities. And mobile devices will offer personalisation and will become the first point of contact for everything we do. Well, who knew?

Mark Cook, chief executive of Getronics UK and Ireland (yep, a household name), takes the prophet's mantle for the safest prediction of the year. He reckons that many companies will move away from BYOD (Bring Your Own Device) to CYOD (Choose Your ... etc). Interesting, as many companies don't even have a BYOD policy yet. Cook thinks that CYOD will place the initiative back in the hands of the organisation, offering employees a device of its own choosing. That's novel. Now why didn't I think of that? I guess you will be able to choose any colour you like, as long as it's black.

So the future is much the same as the present then. I think I'll stick to CES in the future.

Image source

Creative Commons License
The foresight saga by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 09, 2013

Global learning collectives

This is part 7 in a series of posts on the future of learning and technology. I spent the last two years of my school life at AFCENT* International School, in Brunssum, Holland. There was one word to describe AFCENT School - diversity. I remember how culturally rich the experience was, because children from all of the NATO** countries attended, and I often sat alongside American, German, Canadian, French, Norwegian and Swedish classmates.

I discovered that this school's education was far more than just the three 'R's. We learnt phrases from each other's languages (slang and swear words were particularly good fun to practise), heard about unfamiliar customs and practices, and sampled strange and wonderful food and drink from other countries. I should point out that in the 1970s Britain was far less multi-cultural than it is today. This was the age of the Cold War and our parents were serving in the military. We took part in multi-national games that went on all day, where we played the roles of politicians and generals, as we tried to avert a nuclear war. We produced and performed in musicals such as Godspell, Jesus Christ Superstar and Fiddler on the Roof in the school assembly hall. We learnt to play the games of other countries. It turned out that baseball and American football were less of a mystery for us Brits than cricket was for our American counterparts. Who knew?

We learnt traditional songs and stories we would never otherwise have encountered, because each child could not avoid bringing their own personal stories, history and culture into the classroom. German Christmas, Canadian Bring 'n' Buy sales, and American cheerleaders were not something I had encountered in any English school. Believe me, if we'd had American cheerleaders at school in England, I would never have missed a lesson. At AFCENT School we literally had the best of both worlds. Not many school students are as privileged.

Some years ago, I saw several schools try to replicate this cultural richness through the use of video links to connect two (or more) classrooms together across distance. It was a great step beyond the pen pal letters we used to write when I was in secondary school in the 1960s. Then we had to wait for days or weeks for a reply. Now whole groups can meet and converse with each other in real time without travel. Language learning, cultural exchanges, personal stories, preparation for overseas school exchange visits and a whole host of other benefits can be realised when children collaborate and share their learning across language and cultural divides. The excitement of connecting with children in schools in other countries was tangible. Some schools that connected using videoconferencing managed to project the live video images onto big screens so that large groups could participate, and the kids loved it.

Video conferencing was just the start. We now have several alternative technologies that allow schools to connect cheaply and easily with school children in other countries. One of the futures of education will be greater connectivity between schools around the world. Through the use of social media meeting tools such as Google Hangouts, video calling tools such as Skype, and even massively multiplayer online games, students around the world already enjoy better chances to learn from each other and with each other, regardless of their geographical location. How will this develop? I foresee the emergence of global learning collectives where children will learn together across schools and time zones, collaborating on projects and other joint activities, and where technology will help us once and for all to bridge the great divides of geography, culture, creed and ethnicity.

*AFCENT = Allied Forces Central Europe. **NATO = North Atlantic Treaty Organisation

Image source

Creative Commons License
Global learning collectives by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 08, 2013

Where AR we now?

This is part 6 in a series of posts on the future of learning and technology. Technology is great for many things, but perhaps its most useful application is enabling us to do things better, faster, smarter. Augmented Reality (AR) is one such tool that has a lot of potential to enhance our senses, but to date it has seen poor uptake and little real-life application in the world of learning. AR typically provides the user with additional information beyond what can be obtained naturally. It takes live views of the real world around you and augments them with computer generated sensory information such as graphics, data, video or sound.

Examples include smart phone applications such as Layar, which use the GPS and video camera tools to position the user in an information sphere, and feed them contextual information related to that specific geographical location. This can include information about the local environment, navigation of complex transport systems (see the embedded video below featuring Acrossair's New York subway app), weather, news and amenities, as well as cultural or historical information, and even social information. You might, for example, wish to discover who else in your location is using Twitter or another social media tool. The opportunities to use such applications in education are fairly obvious, but not everyone has access to the technology, and it can be quite difficult to use effectively even for those who do gain access. Part of the problem is the inconvenience of having to hold your phone up if you wish to interrogate your environment. A better, more intuitive application of AR is the use of large screens (see the image above, taken in a Westfield shopping centre, London). Better vision, and a more natural means of interrogating one's surroundings, can be achieved using this technology, and objects can be rendered in 3D using simple marker technologies (see this BBC video for a vivid demonstration of some upcoming AR features and uses).
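To make the idea concrete, here is a small, purely illustrative Python sketch (nothing to do with Layar's real code) of the basic operation behind location-based AR browsers: take the user's GPS position, and filter a set of points of interest down to those close enough to overlay on the camera view. The coordinates and place names are invented for the example.

```python
# Illustrative sketch of GPS-based AR filtering: given the user's
# coordinates, find nearby points of interest (POIs) to overlay on the
# camera view. Uses the haversine formula for great-circle distance.
# All POI data below is hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearby(user_lat, user_lon, pois, radius_km=1.0):
    """Names of all POIs within radius_km of the user's position."""
    return [name for name, (lat, lon) in pois.items()
            if haversine_km(user_lat, user_lon, lat, lon) <= radius_km]

# Hypothetical points of interest around central London
pois = {
    "museum":  (51.5194, -0.1270),
    "station": (51.5308, -0.1238),
    "gallery": (51.5120, -0.1280),
}
print(nearby(51.5180, -0.1265, pois))  # → ['museum', 'gallery']
```

A real AR browser adds compass and camera orientation on top of this, so that each nearby point can be drawn in the right place on screen, but the distance filtering above is the starting point.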



Perhaps the most promising and intuitively easy to use AR version is the wearable (or eyewear) application seen most recently in Google Glasses. A simple heads-up display (HUD) is located in the upper right quadrant of one lens of a reasonably normal looking pair of spectacles, and users can control what they see with their mobile phone. Eventually, natural gesture control (such as a head tilt or the wink of an eye) or voice control will be developed to enable even more natural and unobtrusive AR use. It has had its problems and suffered a few teething difficulties, but I believe that AR is on its way to a learning environment near you, and it will catch on quicker than we expect. Our desire to learn more, and to learn while on the move at any time and in any context, will ensure that the wearable AR device will be available at an affordable price very soon. What educators do with them next is really down to each individual's creativity and imagination.

Photo by Steve Wheeler

Creative Commons License
Where AR we now? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 05, 2013

Digital classrooms

This is Part 5 in my series of posts on the future of learning and technology. A few years ago Peter John and I wrote a book entitled 'The Digital Classroom'. It was published by Routledge in 2008 and is now also available in a Kindle version. It wasn't the first book published under that title, and it probably won't be the last. The idea of a 'classroom' (regardless of how anachronistic that may sound) is appealing when it is 'digitised'. It's the old, comfortably familiar territory embellished with the new. Everyone in the world of education, it seems, has an interest in how technology is going to influence what we do in the classroom. The book was well received, and we had some positive comments and feedback. Although the book is probably a little dated now, with technology advancing at a rapid pace, it still set a benchmark for some of the things we could expect to see in the coming years. We talked for instance about how technology would streamline assessment, and how the curriculum might be impacted by new technologies. There were sections on digital literacies and mobile learning, both of which we considered to be important for the success of education and learning in the future. Blogs and wikis and other social media made an appearance, even though at the time they were still fairly nascent in compulsory education. We even mentioned the Semantic Web (or Web 3.0) as a potential horizon technology for learning. We spent a lot of time talking about digital cameras and interactive whiteboards, both of which have had dubious success in the school classroom.

Ultimately though, we could not have predicted the new tools and technologies that have become very much a part of normal school life in recent years. We did not foresee the touch tablets and their rapid success in schools, nor did we predict the rapid rise of smart phones and apps, or the potential of augmented reality. The non-touch motion sensing gestural interfaces now emerging (for example the Xbox 360 Kinect) and the voice activation applications were still just a gleam in the eye for many of us. Perhaps we should not have titled the book The Digital Classroom, but simply Digital Classrooms, because now we know that there are many possibilities, and that classrooms with digital capabilities are many and varied. If I were to take a risk and suggest possibilities for the next 5 years of development, I might be right on some of my predictions, and hopelessly wrong on others, but here we go...

The signs are there that in the coming years, more gestural interface technology will be available for learners, and that advances in manufacture and design will enable the installation of screens on walls, desktops, in fact on any flat surface. The screens will be resilient and high resolution, but as thin as a sheet of card. The mouse, and keyboards such as the one in the image above, may disappear completely in favour of voice and gesture activated tools. For students with mobility issues in particular, this may turn out to be an important leveller. Smart touch devices will continue to develop too, so that every student will have the means to access all their learning resources right there in their hand, wherever they are, and whenever they need them.

Much more learning will be done outside of the classroom. Digital classrooms will become the place where learning is performed, celebrated and assessed - on large wall screens for all to enjoy. For many teachers, learner analytics will become an indispensable tool for tracking student progress and intervening when necessary. Many governments will probably insist on it and legislate accordingly when they realise just how much data can be mined from personal activities across the web. Eye tracking and attention tracking will also emerge as useful behaviour management tools for teachers in the next few years. Gamification and games based learning will establish a stronger foothold in classrooms as teachers realise just how powerful self-paced, self-assessed task oriented and problem based learning can be.
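To illustrate what I mean by learner analytics, here is a toy example of my own (not any real system's algorithm): even a very simple rule over a student's recent scores can surface candidates for a teacher's attention.

```python
# A minimal, hypothetical sketch of the kind of rule a learner-analytics
# dashboard might apply: flag students whose latest score falls below a
# threshold, or whose last three scores are strictly declining, so a
# teacher can intervene early. Names and numbers are invented.

def flag_for_intervention(scores, threshold=50):
    """True if the latest score is below threshold, or the last three
    scores show a strictly declining trend."""
    latest_low = scores[-1] < threshold
    declining = len(scores) >= 3 and scores[-3] > scores[-2] > scores[-1]
    return latest_low or declining

students = {
    "A": [72, 75, 78],   # improving - no flag
    "B": [80, 70, 60],   # declining trend - flag
    "C": [55, 52, 45],   # below threshold - flag
}
flagged = [name for name, s in students.items() if flag_for_intervention(s)]
print(flagged)  # → ['B', 'C']
```

Real analytics systems mine far richer data (attendance, forum posts, clickstreams) and use statistical models rather than fixed thresholds, but the principle of turning activity data into early-warning signals is the same.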

Probably the most important development I foresee though, is the emergence of student developed applications. As technology increasingly takes hold in the school classroom, so students will become increasingly adept at coding. There is more scope than ever for children to experiment with computers. The Raspberry Pi is just the first of many tools to support this. The result will be the creation of a vast array of student games, mobile apps and eventually new forms of hardware (see this TED talk by 12-year-old app developer Thomas Suarez). Many of the new apps and games will be made commercially available. Schools working in partnership with commercial companies will ensure it happens. We may even see some children achieve millionaire status before they leave school, and it will become commonplace for young people to be entrepreneurs before they reach higher education age. Now there's an incentive.

A lot of learning comes from doing, making and problem solving. One of the most important contributions technology has made to education over the last decade can be found in its provisionality - that with digital, nothing is necessarily graven in stone, anything can be changed, upgraded, edited, revised, deleted. Learning in digital classrooms will be much more exciting, because learning through failure and experimentation will engage learners thoroughly in the right conditions.

Finally, a word of warning. We don't know how long these developments will take, nor do we know for sure if they will materialise, because it is very hard to predict the future accurately, and schools are conservative places where change can be very difficult to achieve. What we do know is that the future will be very different from anything we can imagine right now. As ever, your comments and views on this article are very welcome.

Image source

Creative Commons License
Digital classrooms by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 03, 2013

The future of classrooms

This is Part 4 in my series on the future of learning and technology. What will be the future of school classrooms? In my view it is unlikely that we will see the demise of the classroom in the next decade, although those who study the future of education often suggest that such a demise is not only inevitable, but imminent. This is due to the rapid proliferation of mobile technology, the disintermediation of traditional teacher and student roles, new trends such as MOOCs and the upsurge of user generated content on social media sites - all of which take learning away from previously familiar territory. The argument that these tools and trends are removing the need for classrooms and 'schools' in specific geographical locations is a strong one, but it also has some flaws.

In a recent article, Larry Cuban attempts to gaze 10 years into the future, and makes the case that classrooms will stay very much the same during this period. Firstly, he argues that teachers tend to use new technology in much the same way they used old technology, and that as a result very little has changed in terms of pedagogy. Secondly, he suggests that technology is overhyped and is not future-proofed, especially against 'major unplanned events', although what these might be, he fails to elaborate. Anyone who is familiar with Cuban's work will think 'well, he would say that, wouldn't he?', but is he right?

One of the future developments he is optimistic about, however, is the lightening of students' backpacks. Cuban believes that the digitisation of texts (books, encyclopedias and other paper based knowledge) will take hold and become an important trend. He predicts the obsolescence of the hard bound book, at least in the hands of school children. Automated assessment of learning through computer adaptive testing is another trend he predicts, where students are given grades based on their performance on multiple choice questions. Implicit within this scenario is learner analytics, where data mined from student scores, attendance levels, social media postings and discussion group contributions can be analysed to provide teachers with an overview of where the student is, and whether any intervention is required. Also implicit within this prediction is the need for teachers to adopt new roles, change their professional practice, and move from being instructors to being facilitators and moderators. It also means that teachers would need to revisit their concepts of knowledge and learning, and begin to accept that learning often occurs without their direct input, both inside and outside the classroom. Many teachers would welcome such a shift in practice, whilst many others might feel very threatened by such a seismic shift in the profession.
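The adaptive testing idea Cuban describes can be illustrated with a deliberately simplified sketch of my own (real systems, such as those built on item response theory, are far more sophisticated): answer correctly and the next question gets harder, answer incorrectly and it gets easier, so the test converges on the student's level.

```python
# A simplified, hypothetical sketch of computer adaptive testing: a
# staircase rule that raises the difficulty after a correct answer and
# lowers it after an incorrect one, clamped to the item bank's range.

def next_difficulty(current, correct, min_level=1, max_level=10):
    """One difficulty step up for a correct answer, one step down for
    an incorrect one, staying within [min_level, max_level]."""
    step = 1 if correct else -1
    return max(min_level, min(max_level, current + step))

def run_test(start, answers):
    """Trace the sequence of difficulty levels visited for a run of
    correct (True) / incorrect (False) answers."""
    level, trace = start, [start]
    for correct in answers:
        level = next_difficulty(level, correct)
        trace.append(level)
    return trace

print(run_test(5, [True, True, False, True]))  # → [5, 6, 7, 6, 7]
```

In a real adaptive test the grade is then estimated from the level the student settles at, rather than from a raw count of correct answers, which is what makes the approach more informative than a fixed multiple choice paper.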

Cuban is very sceptical of online courses, and presumably his scepticism also embraces MOOCs. He believes that online learning has repeatedly failed to deliver on its promise. His argument here stems from the human need to socialise, to gather together face to face, and learn firsthand the cultural, moral and civic values we hold so important in today's society. Online courses, he argues, fall very short of delivering this richness.

Cuban sees a place for technology in schools, but does not see it radically changing the face of the 'place for education', and says:

'...by 2023, uses of technologies will change some aspects of teaching and learning but schools and classrooms will be clearly recognizable to students’ parents and grandparents.'

Is he right? Will we see no radical change in schools in the next 10 years? Will it take longer for us to witness transformational changes in our education institutions, or are the changes above sufficient to revolutionise pedagogy? Are schools too conservative and resistant to change to be impacted by new technology? Is technology the only catalyst for change, or should we look elsewhere? As ever, your comments on this blog are welcome.

Photo by Paul Shreeve

Creative Commons License
The future of classrooms by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 02, 2013

The future of intelligence

This is Part 3 in a series of blog posts on the future of learning and technology. In my previous blog post I examined the debate about whether we are becoming more intelligent or less intelligent as a result of our prolonged and habituated uses of technology. I believe that if we are to fully apprehend the many issues and nuances of our relationship with future technologies, we first need to begin to appreciate the complexity of human intelligence(s) and the problems associated with trying to model these digitally.

Many commentators express concern about the negative impact technology may have on our ability to think critically, construct knowledge and read/research more deeply. The argument is that we are becoming increasingly dependent on search engines and other tools, that trivialise knowledge and simplify what we learn. A secondary argument is that there is a large amount of content on the web that is spurious, deceiving or inaccurate, and that user generated sites such as Wikipedia and blogs undermine the authority of professionals and academics.

Futurologist Ray Kurzweil's argument looks beyond these issues, holding that the tools we have available to us as a result of networked social media and personal devices actually enable us to increase our cognitive abilities. He argues that we are becoming more creative and have the potential for endless cognitive gain as a result of increased access to these technologies. His position is reminiscent of the work of American cognitive psychologist David Jonassen (1999) and his colleagues, who proposed that computers were mind tools, and that our cognitive abilities could be extended if we invested our memories into them. Others, such as George Siemens and Karen Stephenson, hold that we store our knowledge with our friends, and that connected peer networks are where learning occurs in the digital age. British computer scientist and philosopher Andy Clark is of the opinion that we are all naturally aligned to using technology. In his seminal work, Natural Born Cyborgs (2003), Clark sees a future that combines the best features of human and machine, where we literally wear or physically internalise our technologies.

There are examples of how such a cyborg existence might come about. Recently, demonstrations of Google Glass, eyewear that connects you via augmented reality software and gestural control to information beyond your normal visual experience, and Muse, a brain-wave sensing headband, have veered us in the direction of cyborg experience. I predict that other devices, wearable, natural gesture based, and sensor rich, will appear in the next few years, and these will be affordable to many. And yet, as science fiction writer William Gibson intoned, the future may be here already, but it's just not evenly distributed. He is right. A persistent digital divide exists between the industrialised world and emerging countries. Mobile phones may be proliferating rapidly, but divides are also evident within western digital society, where responses to new technology span a whole spectrum, from eager investment through mild enthusiasm to outright rejection. There are even divides between those who can use the technologies and those who can't. Technology remains unevenly distributed, and will be for some time to come. But the digital divide will not stop the march of technology. What might wearables and non-touch interfaces achieve for us?

It is debatable whether wearable and invasive technologies will increase our intelligence. What such tools might be able to do though, is free us up physically, enhancing our visual capabilities, and enabling us to control devices hands free. They will also enable us to free up cognitive resources, by distributing our thinking and memory, enabling us to focus on important things such as creativity, intuitive thinking, critical reflection and conducting personal relationships, while the wearable computer navigates, searches, discovers, stores, retrieves, organises and connects for us. It will not make us smarter, but technology will enable us to behave smarter, work smarter and learn smarter. That's if we accept that ultimately, the success or failure of such tools is really down to us and us alone.

References
Clark, A. J. (2003) Natural Born Cyborgs: Minds, Technologies and the Future of Human Intelligence. New York: Oxford University Press.
Jonassen, D. H., Peck, K. and Wilson, B. (1999) Learning with Technology: A Constructivist Perspective. Upper Saddle River, NJ: Merrill Prentice Hall.

Photo by Jussi Mononen

Creative Commons License
The future of intelligence by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


January 01, 2013

Is technology making us smarter?

This is part 2 of the series on the future of learning and technology. When discussing the future, especially the future of technology, there are some writers who almost always seem to be quoted. Near the top of the list is the futurologist Ray Kurzweil, who has much to say about our technological future, and also about the growth in human intelligence. His views are quite optimistic, especially around computers and the nature of knowledge. Kurzweil popularised the concept of 'the Singularity', but it was science fiction writer Vernor Vinge who originally coined the term. In a nutshell, the Singularity describes a tipping point in technological development when computers exceed the power of total human capability. This will occur, Kurzweil argues, due to a rapid advance of technology and proliferation of human and machine intelligence. Whether we shall see the Singularity is one question. Whether it will have as profound an effect on our society and our humanity as Kurzweil and others predict is an even bigger question. We simply don't know if computers can or will surpass human thought, or what the implications might be if they eventually do. Such questions have for years been a focus of the Strong vs Weak AI (Artificial Intelligence) debate.

In Kurzweil's view, technology and the human mind are symbiotic, reliant upon each other for their mutual development. His vision of the future requires humanity to become increasingly intelligent, made smarter because of increased opportunities to connect, create and find knowledge across the network. James Flynn (2012) of the University of Otago in New Zealand reveals that over the last century, IQ scores have been steadily rising from generation to generation. Whether this occurs as a direct result of access to technology and greater opportunities for networking is yet to be established. But intuitively this seems to be a reasonable proposition.

There are those who argue the exact opposite, that humans are becoming less intelligent and more dependent upon technology. This perspective is championed by Nicholas Carr (2011), who provocatively argues that habituated use of search tools such as Google is 'making us stupid'. Carr's essential thesis is that we are bombarded with content on the Internet, and cope with this by reducing our depth of study whilst increasing our breadth of study. In other words, he argues, we tend to skim read and miss out on the richness of meaning we would have absorbed pre-internet. In his original publication, Andrew Keen (2007) was adamant that the Internet is undermining the authority of academics and is a threat to our culture and society. In his most recent edition, Keen turns his ire specifically onto user generated media such as blogs and YouTube (Keen, 2010). Tara Brabazon (2007) appears equally cynical about the impact the Web is having on this generation of learners, but provides a more measured response. She suggests that it is an error for universities to invest more in technology than in teacher development, and in so doing, she opens a debate on the future of education in the digital age.

So the future of technology supported learning is uncertain and contested. Are we being made more intelligent by our habituated uses of technology, or are we becoming smarter because we have more opportunities to create our own content, and think more deeply about it? Does our collective increase in intelligence owe itself to better connections with experts and peers, or should we simply put the growth of knowledge down to a natural, progressive evolution of the human mind? Is technology actually a threat to good learning, creating a generation of superficial learners, or do interactive tools such as social media and search engines provide us with unprecedented access to knowledge?

Such questions are exactly what the study of the future is all about.  

References
Brabazon, T. (2007) The University of Google: Education in the post-information age. Burlington, VT: Ashgate.
Carr, N. (2011) The Shallows: What the Internet is doing to our brains. London: W. W. Norton and Company.
Flynn, J. R. (2012) Are we getting smarter? Rising IQ in the 21st Century. Cambridge: Cambridge University Press.
Keen, A. (2007) The Cult of the Amateur: How Today's Internet is killing our culture and assaulting our economy. London: Nicholas Brealey.
Keen, A. (2010) The Cult of the Amateur: How blogs, Myspace, YouTube and the rest of today's user generated media are killing our culture and economy. London: Nicholas Brealey.

Image source

Creative Commons License
Is technology making us smarter? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 30, 2012

Facing the future

At the end of each year many of us tend to focus on the future, wondering what it will bring. We wish each other a happy New Year, and hope that life will treat us kindly. We try to shape our own futures by making New Year resolutions, many of which fall by the wayside after a week or two. Much of our future is not ours to shape. But still we persist in trying to predict the future.

Many of our predictions about the future are based on speculation or wishful thinking. Remember the personalised jetpacks we were all going to use, and the Moon colonies many thought would be established in the 1970s? No matter what we think we 'know' about the future, we are unable to predict it with one hundred percent confidence. Casinos and bookmakers make a fortune out of our desire to guess what will happen next. On 21 December 2012, many people held their collective breath because of a well studied, but poorly understood 'prophecy' about the ending of an age. Some sold their houses, or gave up their jobs in preparation for the 'end of the world', and were relieved and disappointed in equal measure when nothing happened. The Mayan Apocalypse did not happen. Many of us didn't believe it would. We have seen it all before, several times. Down through the ages self-appointed religious cult leaders have predicted the return of Christ, or the start of Armageddon, or some global catastrophe, largely based on their own personal interpretations of texts or 'signs'. This always spreads fear and uncertainty. All the modern day prophets have failed, but have ruined the lives of many gullible and impressionable people in the process.

What about teachers and schools? If we try to predict what will happen to education in the next year, we will probably have reasonable success, especially if we work within the teaching profession. Those of us who are engaged as learning professionals tend to see the trends first, and can understand the nuances and vagaries of education better than the average 'man in the street'. This is why practising teachers are better placed than politicians to offer ideas for improving education. The caveat is that if we try to predict what will happen in education over a longer time scale, say 3 to 5 years' time, we become less accurate, because random events, changes in policy, variations in the world economy, new technologies, or other unknown variables can happen to change the terrain.

And yet, you and I have a sneaking suspicion that if we do not try to anticipate the future, and make ready to respond to changes as they occur, we will be caught off guard. And we would be right. Anticipating change is a natural part of our survival strategies, and should be encouraged. So we have a conundrum. Do we try to predict the future and risk being badly wrong, or do we just let the future roll over us and try to adapt to it? If we decide on the latter, then we will be at the mercy of change, and not only will education suffer, more importantly, the children and young people in our care will be affected. If we decide on the former, then at least we have made a choice to try to anticipate the future, and we have an outside chance of being right. The shorter the timescale we try to predict over, the more chance we have of being right. The farther we try to gaze down the corridor of the future, the more risk we run of being wrong, because there will be more opportunities for unpredictable things to occur.

Over the next few blog posts I intend to examine some of the predictions that have been made on the future of education, with specific reference to technology and the role it will undoubtedly play.  Some of the predictions will be fairly inevitable, others will be wildly speculative, and many will sit somewhere in between, as possibilities that may or may not become reality. If we are prepared for change, then we will be less likely to be taken by surprise. We can at least prepare for a successful new year of teaching and learning based on what we believe is just around the corner.  But we still need to live and work in the present.

I wish you a happy and successful New Year.

"Learn from the past, prepare for the future, live in the present." - Thomas S. Monson

Other posts in this series
Is technology making us smarter?
The future of intelligence
The future of classrooms
Digital classrooms
AR we there yet?
Global learning collectives
The foresight saga
Touch and go

Image source

Creative Commons License
Facing the future by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 29, 2012

Communication and learning in a digital age

The latest issue of the online open journal eLearn Centre Research Paper Series has just been published. Issue 5 considers Communication and Learning in a Digital Age, and features papers from a number of scholars in the field, including my own paper on current research perspectives on digital literacies. The papers originate from a conference held in Barcelona in the Summer of 2012. Here is the introduction, written by Sandra Sanz and Amalia Creus (Open University of Catalonia):

Experience of time and technology also has an important impact on learning. The drastic reduction on lifetime of knowledge, connected with the overflow of information and fragmentation of sources, are just some of the features that are changing the way we learn. This situation challenges us to think more creatively about the interaction between communication technologies and learning, and to explore how our educational models are being impacted by the processes of social change that come with digitalization, the emergence of social media and the Web 2.0. 

Since February 2011 the group ECO (Education and Communication), driven by teachers of Information and Communication Studies at UOC, has been providing a forum for researching communication and learning, and for sharing teaching innovation through e-learning environments based on collaboration, creativity, entertainment and audiovisual technologies. 

The five articles in this edition of eLC Research Paper Series reflect the short but intense trajectory of the group. Some of them are a selection of papers presented at the International Conference BCN Meeting 2012, organized by ECO. The other articles were written specially for this issue by members of the group and give a picture of the themes and questions we are now exploring. 

For those who may experience problems downloading my Digital Literacies paper from the site (it doesn't work well on Macs) below is a downloadable .pdf version.



Image source

Creative Commons License
Communication and learning in a digital age by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 22, 2012

I'm dreaming of a White laptop...

Me with Keith in 2009
... or any colour really. It doesn't matter that much, as long as it does the job. My old laptop Keith (named after a Rolling Stone - he's very old, has seen better days and is falling to pieces, but is still just about functioning) is about to pop his clogs, I fear. I'm not sure if Keith will even make it to the conclusion of this blog post; he's making ominous chugging and whining noises (which I suppose is also a little like his namesake). You see, Keith is almost 8 years old. In computer years that is way beyond geriatric. His CD-ROM drive packed in ages ago, some of his USB ports have ceased to care, and all of his appendages, his rubber feet and other accoutrements have long departed. Even his volume control has shuffled off this mortal coil. He is, quite frankly, in a sorry state. But still he soldiers bravely on. If I take Keith out to use with my students, there are often remarks like: 'Wow, is that a museum piece?' or 'OMG, that laptop must be almost as old as you!' Cheeky beggars. They lose a few grade points for that.

In fairness, Keith does come from another era. He is chunky and thick, his battery has hardly any life in it (the only way I can operate him is to plug him in to the mains), and he is slowing down noticeably. If I don't remove him from my lap after 30 minutes I risk sustaining scorch marks to my legs, because he heats up to the point of shutdown. He takes an eternity to boot up every time I switch him on. He takes ages to shut down. He finds it difficult to do simple tasks, like opening a new browser window. Did I mention he is very, very .... slow? He suffers from the laptop version of arthritis I guess. As we get older, we all suffer from some form of mobility issue, but for Keith it has become a part of his core personality. If he ever did anything fast, I think I would run out of the room in shock.

He is crashing out on a regular basis these days. A self-induced coma. Keith is asleep more than he is awake, and several times I have thought I had lost him forever, given some of the error messages I see on the screen. Once or twice he has refused to get out of bed at all, but after a few days of black screen, he mysteriously resurrects himself. It's as if he is struggling to escape his inevitable eternal dark void. But the best thing about Keith is that he never suffers from a loss of memory. Not since I invested in an external hard drive. Now Keith never loses any data, because it's all offloaded onto an external medium, which is kept separately from him, in case he ever suffers from the computer equivalent of incontinence or something worse.

I still take care of Keith. I have not dropped him since that notorious incident at a conference in 2007. He survived, but for several glasses of wine and the tablecloth, it was a terminal experience. These days Keith doesn't travel with me to far off destinations. You won't see him at conferences, weddings or Bar Mitzvahs anymore. He is too old for air flights now. He resides at home where he is comfortable, chugging slowly along, performing his tasks in his own time. I wouldn't want to bury him in some far off foreign field.

So it is time for a new laptop. Christmas is nearly upon us, and I will be disappointed if I receive any more gifts of socks, frankincense (Brut aftershave) or myrrh. Gold I will cope with. But this year, at the risk of offending my anthropomorphised little digital companion, and hastening his sad demise, I want a new, fast, graphically rich and very streamlined laptop. I want a device I can take with me everywhere, use any time, quickly and without too much fuss, and certainly without attracting any snide comments from my students. And yet, whatever Santa brings me, whatever shape and form my new laptop takes, I will always think fondly of Keith, my faithful laptop from which all my blogs, slideshows and articles have emanated over these last 8 years.

And in the future, if he is still able, I will occasionally fire him up just to say hello. And I will remember.

Photo by James Clay

Creative Commons License
I'm dreaming of a White laptop... by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 17, 2012

Headline Muse

The Muse Headband
They have finally done it. Someone has come up with a way to control computers using mind power, and the device is non-invasive. At least, that's what InteraXon, the company that designed the Muse 'brain sensing' headband, wish to achieve. 'It lets you control things with your mind' runs the sensational strapline for the Muse Headband promotional video. Mind control? This will sound quite sinister to many, and others will be far from convinced. After reading the authoritative but still controversial book Physics of the Future by Michio Kaku recently, I have a more open 'mind' on the matter. I don't doubt the claims InteraXon make, or at least I won't when I get to see Muse demonstrated with my own eyes. But I think 'brain sensing' is an unfortunate tag line. Could it imply that there is no brain there to sense? Are they perchance anticipating that brainless people will buy the device? To me, 'brain sensing' implies that it is detecting whether a brain is present, rather than the more spectacular functionality it can potentially offer. Perhaps InteraXon ought to revise their tag line so it more accurately represents the capability of the device. You see, Muse is actually a wearable electroencephalograph (EEG), with four sensors positioned strategically around the Alice-band style headgear that you wear.

If you can get past the irritatingly repetitive and slightly-louder-than-it-should-be background music on the video, and ignore the embarrassing geekiness that exudes from some of the presenters (I think it's really cool!), the Muse headband does look like it has the potential to be a breakthrough technology. The last time we had a true technological breakthrough of any magnitude was when Microsoft released the Kinect for the Xbox 360. Kinect was truly revolutionary because it pointed up all sorts of possibilities around non-touch, voice activated, natural gesture computing, at an affordable price. The simple juxtaposition of two cameras made all the difference. All you had to do was think creatively, and hack the system to get that Tom Cruise, Minority Report (The future can be seen!) action going. Will Muse have a similar impact to Kinect? Will it launch us into a new era of control technology? Time will tell, because at present Muse is still at an early stage of development, and InteraXon are themselves speculating on its potential to bring advances in the non-touch, thought control of devices.

At present, InteraXon are offering advance units for a mere US$165, on the understanding that you test out the system for them. What is currently on offer goes in one direction only. The Muse Headband will be configured to measure your 'brain activity' and transfer an analysis to your laptop or iPad. The device will measure areas of your brain as they activate while you play a 'brain training game'. The manufacturers claim that it will enable you to exercise your memory, measure your attention span and practise relaxation techniques. But is Muse more than simply a measuring device? Later, promise InteraXon, using the data they collect, there will be the possibility of using next-generation Muse Headbands to control computers and other devices by mind power alone.

The future has a habit of creeping up on us from behind. And it does it quicker than we sometimes imagine it can. We once thought voice control was science fiction. Enhancing our senses was fine for vision, hearing, even speech. We have prosthetics for all of those. But we have carefully steered away from any mind enhancement. We didn't have the technology. We left that kind of thing to Star Wars, magic and folklore. Now, it seems, we have the technology, and at the moment, mind control is right at the edge of our imagination of what technology can possibly offer. From motion sensing to mind sensing in just a few short years? Who would have thought it? How soon before thought controlled computing becomes a reality for us all? And what then will we need to do (or to become) to adjust to the brave new world that will be upon us?

Images by InteraXon

Creative Commons License
Headline Muse by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 10, 2012

Things ain't what they used to be

Not so long ago, objects were simply objects. They only came alive in Disney cartoons, or after a heavy drinking session. Most of the time, objects were simply there to be used to perform a task the user required. Now that is all about to change,  as we advance into the next phase of Web evolution. We are about to see the emergence of what Kevin Ashton called 'the Internet of Things'. In a recent blog post, Jamillah Knowles wrote that a revolution is about to begin where the objects in our homes and workplaces will become smarter, more context aware, and will be able to interpret data fed to them, before taking action. As physicist Michio Kaku wrote recently, 'now we can say to Siri, move my meeting back an hour from 3 to 4, soon we will be able to say to Siri, mow the lawn.' The difference is, at present we can use our devices to interact directly with virtual space, but with smart context aware objects surrounding us, we will be able to interact through virtual tools into the real world.

Already we have QR codes and RFID embedded into objects. These are very effective, but they are superficial compared to what comes next. The next stage, according to this generation of Internet gurus, is to embed smart chip technology, so that objects can have a conversation with our devices. Not only does that have promising implications for health care, engineering, architecture, business and entertainment, it also promises a bright future for ambient learning. Imagine a group of children going on a visit to a museum. Each is equipped with a smartphone. An app on their phones interacts with all of the exhibits in the museum. If they stand in front of a statue or a model of a dinosaur and hold their phone up, the object will send information to the phone. The longer they stand in front of the exhibit, the more information it will feed them. When they return to their classrooms or homes later, they have a complete archive of all of the objects they have seen that day. They can use this information for projects, essays, blogs and podcasts. It can then be used in whatever content they create to show what they have learnt in the form of text, images, sounds and video. The real learning happens when the kids begin to integrate their experiences, the information they have captured and their interaction with it into creating, organising and sharing their own content.
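The exhibit-to-phone exchange imagined above can be sketched in a few lines. This is pure speculation to illustrate the idea: the exhibit, its layers of information and the timing threshold are all invented.

```python
# Hypothetical sketch: the longer a visitor lingers near an exhibit,
# the deeper the layer of information the object sends to their phone.
EXHIBITS = {
    "t_rex": [
        "Tyrannosaurus rex skeleton, Late Cretaceous period.",
        "This specimen was excavated in Montana.",
        "Estimated bite force: tens of thousands of newtons.",
    ],
}

def info_for(exhibit_id, seconds_nearby, seconds_per_layer=30):
    """Return progressively more detail the longer the visitor stays."""
    layers = EXHIBITS.get(exhibit_id, [])
    depth = min(len(layers), 1 + seconds_nearby // seconds_per_layer)
    return layers[:depth]

# The day's visit builds up an archive to review back in the classroom.
archive = {"t_rex": info_for("t_rex", seconds_nearby=45)}
print(archive["t_rex"])  # two layers after about 45 seconds
```

A real system would of course rely on proximity sensing (an RFID tag or smart chip in the exhibit talking to the phone) rather than a hard-coded dictionary.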

All of this has been made possible because of the disaggregation of computer and microchip technology. In 2011, the number of smart objects connected to the Internet surpassed the number of people on the planet. This trend will accelerate exponentially in the next few years to the point where we see ubiquitous computing. No longer do we need to carry computers around with us to be able to interact with digital media. Using the smart device in our pockets, and the ubiquitous computing power that is being embedded in objects all around us, we will soon be able to learn from those objects, invest our memories inside them, and even get them to do our bidding.

Things ain't what they used to be. Things are about to get a whole lot smarter.

Photo by Rod Senna

Creative Commons License
Things ain't what they used to be by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 06, 2012

Tracking sentiments

Last year I flew via Amsterdam into Cologne, Germany to give a keynote speech at a large international conference. I arrived at the airport and made my way through passport control into the baggage collection area. Along with my fellow passengers I dutifully stood waiting at the carousel, watching as bags and cases of all sizes, shapes and colours processed slowly by. Passengers began to collect their luggage and leave. I continued to wait. Decreasing numbers of passengers waited with me as, one by one, they spotted their bags, grabbed them and made off to find their transport. Soon I began to get the sneaking suspicion that my bag wasn't going to arrive. This is what I sometimes refer to as 'baggonising', and it's something I am becoming increasingly familiar with. After I had been left standing alone like a spare lemon at a cocktail party for a while, I admitted defeat, and walked over to the KLM desk to ask why my bag had not appeared.

The woman behind the desk checked, and then with a straight face informed me that there had been 'technical difficulties' at Amsterdam Schiphol, and that my bag would not now arrive until the following evening. She asked for the name of my hotel and told me it would be delivered directly to my room. All fine and good, but there was me standing there in my jeans and sneakers, while my best suit and shirt were in my luggage. In Amsterdam. Worse, my keynote was scheduled for the next morning, which left me in something of a dilemma. To say I was furious with the airline would have been an understatement. The plane I disembarked from at Amsterdam was exactly the same plane I got into again to fly onward to Cologne. I recognised the crew. I got off the plane, trotted a mile or more across Schiphol Airport and then got back on to exactly the same plane, but in the meantime my bag had been removed and left who knew where.

And so I arrived at my hotel, checked in to my room and then proceeded to tweet my problem to anyone on Twitter who cared to read it. I named and shamed KLM, and then went off to find something to eat. An hour later, to my surprise, KLM responded to me on Twitter, apologising for the mix up and advising me that I should go and purchase whatever I needed, and they would foot the bill. Wonderful. Clutching my credit card, I went off and bought a new pair of shoes, two new shirts, underwear, socks, shaving kit and toiletries.  I stopped short of purchasing an expensive new suit. I was wearing a serviceable jacket and anyway, KLM would probably only increase the airfares to compensate if I blew another 1000 Euros on a Ted Baker original.

The keynote went well and my luggage duly arrived the following evening. But how did KLM know to respond to my tweet? Answer - they were scanning for mentions of KLM on Twitter and other social media. This is known as sentiment tracking, a method that may well come in useful in education in the future. I'll give you some examples of how it's used now and how it works...

The Twitter example above is a very primitive form of sentiment tracking and analysis (also known as opinion mining). It simply involves a KLM staff member regularly scanning the popular social media channels to intervene if there is any bad publicity or complaint, before it blows up into something unmanageable. Several tools are available for sentiment tracking on Twitter and other social media channels, and the practice is becoming much more sophisticated. Many large businesses do this now, because they want to know what is being said about their brand. They know that a complaint in a public forum can have a highly negative impact on their business if it's not dealt with quickly. But sentiment tracking can also be harnessed positively by businesses. Recently I wanted to buy some black, Italian handmade slip-on shoes. I visited one or two online stores, and then, without purchasing, I went off to do other things. An hour later, I searched on Google for some e-learning blogs, and landed on my first page. There at the bottom of the Blogger website this advert was staring back at me:


How did the system know how to target me? The online store (Amazon) had logged my IP address, my interest in that specific product, and the fact that I had not purchased. It had probably set a cookie. It assumed from this that I must still be interested. At the next available opportunity, Amazon targeted me with an advert through Google Ads via Blogger. The same applies when you mention something on Facebook, or simply let slip your date of birth, location or other personal information such as hobbies and interests. Before you know it, Facebook is pushing targeted advertising to your page, and it's highly effective. Facebook logs dozens of different items of personal data from your actions every time you visit, tag a photo, post a new status update or 'like' someone's comment.
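The retargeting loop described above can be modelled very simply. This is an illustrative sketch only (the cookie identifier, product and function names are all invented), not how Amazon or Google Ads actually work.

```python
# A visitor's browsing interest is logged against their cookie; an ad
# slot on a later page checks that cookie for unpurchased interests.
interest_log = {}  # cookie_id -> set of products viewed but not bought

def record_view(cookie_id, product):
    interest_log.setdefault(cookie_id, set()).add(product)

def record_purchase(cookie_id, product):
    interest_log.get(cookie_id, set()).discard(product)

def pick_advert(cookie_id):
    """The ad network serves an advert for any lingering interest."""
    interests = interest_log.get(cookie_id, set())
    return next(iter(interests), "generic advert")

record_view("visitor-42", "black Italian slip-on shoes")
print(pick_advert("visitor-42"))  # → black Italian slip-on shoes
```

Once the purchase is recorded, the interest is cleared and the visitor goes back to seeing generic adverts.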

I noticed the following three adverts on my Facebook page just now: You will notice that Facebook knows I am in the UK. It knows a lot more about me than that though. The last advert is because Facebook knows I am a Manchester United fan - that little detail is there in my profile somewhere. The middle advert is because it knows I am a guitarist, again from information in my personal profile. The first advert? I'm not sure why the first is there, because I have never let it be known that I wish to illuminate something 200 metres away from me. Perhaps someone else can shed some light on this. It's not in my profile that I like to bother pilots as they land their jet airliners, or that I have aspirations to be a covert operative for MI6. Sometimes sentiment tracking gets it wrong, and sometimes it just takes a wild punt and hopes for the best, a bit like playing Internet Battleships. But it could be a lot worse. Facebook might decide to send me links to a mature women dating site, or a wholesale Viagra dealer, just for a laugh. That would be hard to explain. Sentiment tracking is usually quite accurate though, picking up on your emotional statements, likes and dislikes, conversations, as well as links you have previously clicked. Sometimes it seems to take a random guess, as with the torch. But sometimes that guess can be disturbingly accurate.


How does sentiment tracking work? At the simplest level, the system uses Natural Language Processing (NLP) techniques to mine the words you type into your status updates or query boxes. At a deeper level, artificial intelligence applications capture the NLP data and process them into clusters that have collective meaning. A lot of modelling can be done with those kinds of data. Essentially, sentiment tracking makes sense of what you do on the web, and then transforms it into recommendations, actions or, in this case, advertising. There are many problems with this kind of computation, including questions over how machines can differentiate between various emotional intensities, distinguish between polarities of opinion, or detect subjectivity in a statement. However, refinements in these systems will continue to improve their accuracy.
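At its simplest, the lexicon-based approach described above amounts to word counting. The word lists and tweets below are invented for illustration; real systems use far richer NLP models than this.

```python
# A minimal sketch of lexicon-based sentiment scoring, the crudest
# form of opinion mining: count positive and negative words.
POSITIVE = {"great", "love", "thanks", "wonderful", "helpful"}
NEGATIVE = {"lost", "furious", "delayed", "terrible", "shame"}

def sentiment_score(text):
    """Crude score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_complaints(tweets, brand):
    """Pick out negative mentions of a brand for a human to respond to."""
    return [t for t in tweets
            if brand.lower() in t.lower() and sentiment_score(t) < 0]

tweets = [
    "KLM lost my luggage and I am furious",
    "Thanks KLM, wonderful service today",
    "Nice weather in Cologne",
]
print(flag_complaints(tweets, "KLM"))
# → ['KLM lost my luggage and I am furious']
```

This also shows why the problems listed above are hard: a bag-of-words scorer cannot see intensity, negation or sarcasm at all.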

When it comes down to group behaviour, sentiment tracking can be quite accurate. As we have demonstrated with our previous research into Technosocial Predictive Analytics (TPA), using a mashup of NLP, AI, GPS and geomapping, events such as flu epidemics and social movements can be tracked and even predicted quite accurately over geographical location and time. Have you ever shopped for a book on Amazon? You select your book and then Amazon displays a message saying something like '76 people who bought this book also bought...' and you suddenly realise that there's another book you didn't know about on a similar subject to your own purchase, and now you want that book too! It's a very effective marketing ploy, but there is also enormous educational potential. Amazon is using a form of crowd sourcing for its sentiment tracking, and is selling you a book you didn't know you wanted, based on the tacit approval of a cluster of people who are similar in their tastes, profiles or backgrounds to you. In effect, the individual acts of buying books, combined, create a desire line - a slime trail of social enzymes if you will -  that can be mapped and recommended to future purchasers of similar products.
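At its heart, the 'people who bought this book also bought...' mechanism is co-purchase counting across many shopping baskets. Here is a minimal sketch; the baskets and titles are invented, and Amazon's real system is naturally far more sophisticated.

```python
# Count how often pairs of items appear in the same basket, then
# recommend the most frequent companions of a given item.
from collections import Counter
from itertools import combinations

baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_a", "book_c"},
    {"book_b", "book_c"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def also_bought(item, n=2):
    """Items most often bought together with `item`."""
    companions = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            companions[b] += count
        elif b == item:
            companions[a] += count
    return [other for other, _ in companions.most_common(n)]

print(also_bought("book_a"))
```

This is the 'desire line' in miniature: each individual purchase contributes to a trail that shapes what the next shopper is shown.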

Clearly there are opportunities to harness the power of these methods in education. Imagine students being directed to new and highly useful content they were previously unaware of. Imagine new content being created automatically on the basis of the actions of like minded scholars in dispersed locations. Imagine content being changed and updated automatically, based on the activities of a global community of practice. Finally, imagine being able to track the actions, content creation and decision making of your groups of learners, and mapping these onto information graphics to track their collective and individual progress, knowing when to intervene and when to let them alone. This kind of learner analytics (or educational data mining) will emerge from the collective intelligence of crowd sourcing and the sentiment tracking of individual actions and behaviour. The technology already exists. We now have to determine whether we want this capability in education, and if we do, we next have to ask what will be the ethical, pedagogical and social implications?

In the next blog post: How Google is refining your web search

Photo by David Sky
Other images by Steve Wheeler

Creative Commons License
Tracking sentiments by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 04, 2012

The Smart eXtended Web

Will the Web recurse infinitely?
Many of us are obsessed with the future, and are constantly wondering what new technologies, trends or events will change our lives forever. The Horizon Report is one of the most eagerly anticipated reports each year among educators, because it peers down the corridor of time and attempts to predict what we can expect to see in our classrooms in the next year, in two to three years, and in five years. People spend huge amounts of money each year gambling on the future. The average person bets on horse races or the lottery, whilst high-powered executives buy and sell stocks and shares. Some put their faith in clairvoyants, who for a price will attempt to predict your personal future. In the world of learning, we are obsessed with questions about where education is heading next, how work-based learning will be enhanced, and more effective methods of engaging learners. Many educators have invested their trust in the use of new and emerging technologies for the future success of learning. Others have been more reticent, preferring instead to rely on the old, tried and tested methods of education and training.

Regardless of personal perspectives, our society is advancing rapidly into a technological future in which just about everything will change. Nothing short of a global disaster will stop it. We have seen the trends. Over the last 20 years, mobile phone texting has taken a significant hold on the communication habits of billions of citizens. New computer interfaces are being introduced that will supplant the ubiquitous keyboard and mouse. Soon we will control our computers using voice and gesture, even facial expressions, mood changes.

We have never been so connected as we are today. Global telecommunications mean that anyone connected can link with anyone else, hear and see them in real time, and send and receive documents at the speed of light. We carry our offices in our pockets. We increasingly do more of our shopping online, and we spend significant proportions of our working days dealing in bits rather than in atoms. We generate enough media content every day to dwarf anything previous societies could create in an entire year.

In the last decade, we have seen the liberation of the microchip from the computer. Now processing power can be embedded into any object, allowing it to be connected to the global network. This is significant, because it heralds a new kind of network made not only of knowledge and people, but a network of smart objects, an Internet of Things. Not only will our personal possessions become connected and smarter, so will our homes, our classrooms, our communities, and ultimately our cities. Yet these rapid technological changes could also be our Achilles heel. We are now so reliant on our computing power and telecommunication capability that if it were suddenly removed or disrupted, much of our familiar world would grind slowly to a halt.

The Web has changed, evolving through a number of iterations, to become increasingly prescient not only about what we wish to search for, but also the context within which it is being searched. Semantic search also takes our previous behaviour into account. Now the Web is about to get even smarter. Where Web 1.0 was about connecting content, and Web 2.0 (the social web) was about connecting people, Web 3.0 (the semantic web) will be about connecting collective intelligence. It will be the global network of distributed cognition. But just what will this emerging hive mind look like and what will we be able to do with it?

I wrote about Web 3.0 in an earlier post and speculated that the 'Smart eXtended Web' would be characterised by a number of features that included intelligent collaborative filtering of content, 3D visualisation and interaction and extended smart mobile interfaces. Now several new developments will bring these ideals to fruition, and it will happen sooner than we expected, because change is not linear, it's exponential.

Paul Groth talks about Web 3.0 in terms of what it will be able to do for us. In his paper The Rise of the Verb he explains his vision of how the web will evolve beyond the representation of knowledge in static data sets to the point where it can turn our commands into actions. Already, he writes, we can say to Siri: 'Move my meeting from 3 to 4'. In the future we will be able to say to Siri: 'Mow the lawn' and it will be done. The difference, he suggests, is that at present we can command our tools to action in the virtual world, but in the near future, with the advance of the Internet of Things and an emerging capability of the Web to interpret verbs as calls for action, we will be able to command operations in the real world too. He argues that in the next ten years we will see a web that is not only grounded in mathematical functions and definitions, but one that is also able to operate through the smart objects around us. Ten years? I think it will be sooner.
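Groth's idea of verbs as calls for action can be caricatured as a tiny command dispatcher: the leading verb selects an operation on a connected device. Everything here (the verbs, handlers and responses) is invented for illustration, not a real Siri or Internet of Things API.

```python
# Map a spoken verb phrase onto an action against a (hypothetical)
# connected device: the verb is the call for action.
actions = {}

def handles(verb):
    """Register a handler function for a verb."""
    def register(fn):
        actions[verb] = fn
        return fn
    return register

@handles("mow")
def mow(rest):
    return "lawnmower started: " + rest

@handles("move")
def move(rest):
    return "calendar updated: " + rest

def dispatch(command):
    """Interpret the leading verb and pass the rest to its handler."""
    verb, _, rest = command.partition(" ")
    handler = actions.get(verb.lower())
    return handler(rest) if handler else "don't know how to " + verb

print(dispatch("Mow the lawn"))  # → lawnmower started: the lawn
print(dispatch("Move my meeting from 3 to 4"))
```

The hard part Groth describes is, of course, everything this sketch omits: understanding free-form language and brokering the command to the right smart object.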

In the next blog post: Sentiment tracking

Image source Fotopedia

Creative Commons License
The Smart eXtended Web by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


December 03, 2012

Recycling learning

"...making good use of the things that they find, things that the everyday folk leave behind..."

Yep, that's a blast from the past for those who grew up watching the children's programme The Wombles on television. Essentially, the Wombles were furry creatures who lived on Wimbledon Common and tidied up all the litter left behind by the 'everyday folk'. Not only did they tidy up, they also recycled the objects they found, into something useful. We could do with a few Wombles down our street, I can tell you.  

How does this fit into education? I hear you asking.... well, read on. 

A useful concept to aid the understanding of current web based learning practices is bricolage (Levi-Strauss, 1996). Art students will recognise it as the technique of creating an image from a variety of materials that just happen to be available. In architecture, bricolage can refer to the seemingly chaotic proximity of buildings from various periods and styles. For Levi-Strauss, bricolage described any spontaneous action, especially those that are steeped in personal meaning. The principal meaning of bricolage, however, evokes a 'do it yourself' ethos, where each individual creates personal meaning through seemingly haphazard actions that draw together disparate objects to form new wholes.

In the UK punk movement of the late 1970s, chains, safety pins and dog collars were all appropriated as fashion items, eventually assuming additional meaning as statements of personal identity. In the context of learning, bricolage is a useful analytical lens. It was applied by Seymour Papert (1993) to explain a particular style of problem solving. He suggests that bricoleurs reject traditional, systematic analyses of problem spaces in favour of play, risk taking and testing out. Younger users of technology tend to rely less on formal instruction or user manuals when they encounter new tools. Instead, they launch into an exploration of the device, to see what it can do. They learn to use it by testing it out, and also by observing their peers. These sentiments are echoed by Sherry Turkle (1995), who argues that those working in digital spaces, such as programmers, often work in a bricoleur style, working through a 'step-by-step growth and re-evaluation process', regularly spending time standing back from their work to reflect.

Many of the above traits are desirable, transferable skills for 21st Century working, and can be witnessed in the daily activities of learning on the Web. As students develop their ideas, they create content, often drawn together through a variety of search and research methods that can be disparate and seemingly unconnected. Learners draw on a wide range of content, not only from the web, but also from other media and non-media sources as they construct personal meaning. Their personal learning environments (PLEs) tend to be a bricolage of free tools, handheld devices and a personal network of friends, family and peers. Haphazard though their learning might appear, over a period of time the various sources of their content crystallise together into accessible, meaningful and personalised learning.

In essence, today's digital learners are finding content, recycling and repurposing it, organising and sharing it. They are creating their own spaces, developing and using their own tools and apps, and generally 'making good use of the things they find'. In so doing, I believe that this current generation of learners is developing into one of the most innovative, literate and knowledgeable generations this planet has ever seen.

References
Levi-Strauss, C. (1996) The Savage Mind. London: Orion Publishing Group.
Papert, S. (1993) Mindstorms: Children, Computers and Powerful Ideas. New York: Basic Books.
Turkle, S. (1995) Life on Screen: Identity in the Age of the Internet. New York: Touchstone.

Photo by David Radcliffe

Creative Commons License
Recycling learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


November 29, 2012

10 characteristics of authentic learning

I argued yesterday that authentic learning is a vital part of education in the 21st Century. The need to create learning opportunities that are grounded in reality, and form a concrete basis for real world transferable knowledge and skills has never been more important. We also need authentic assessment for learning. Too often in school classrooms around the world the delivery of content is abstract, disconnected and decontextualised. Students are then regularly tested on their recall of what they have 'learnt' and graded as successes or failures. But exactly what is their success or failure? And what does this process of assessment teach students about the school system? Part of the problem is that content is delivered, with little opportunity for students to make personal sense of that content. Another problem is that students are then expected to replicate that 'knowledge' in a form that is recognisable as the original. Students are therefore learning exactly what is already known, rather than exploring new knowledge and gaining fresh insight on the world. 

Some have previously argued that students at this stage in their education require some knowledge that they can build on. True, but how long should this priming of initial knowledge be allowed to go on? When do we begin to develop independent, autonomous lifelong learners? Authentic learning (and authentic assessment) relate not only to the knowledge students receive, but also to the knowledge production they can themselves achieve. Such learning is not instant, nor can it be achieved over a brief time period. But it can be nurtured early. Complex and iterative learning of this kind takes a lifetime of study, and is always grounded in real world experience. Reeves et al. (2002) have much to say about the characteristics of authentic learning, including an emphasis on personalised learning that can be achieved through ill-structured, problem-based learning, where meaning is negotiated within collaborative learning environments, and learning can be situated within multiple contexts and perspectives. Their list of 10 characteristics below is a very useful toolkit for any teacher who wishes to ensure that authentic learning is supported in their classroom:  
  1. Real-world relevance: Activities match as nearly as possible the real-world tasks of professionals in practice rather than decontextualized or classroom-based tasks.
  2. Ill-defined: Activities require students to define the tasks and sub-tasks needed to complete the activity. 
  3. Complex, sustained tasks: Activities are completed in days, weeks, and months rather than minutes or hours. They require significant investment of time and intellectual resources. 
  4. Multiple perspectives: Provides the opportunity for students to examine the task from different perspectives using a variety of resources, and separate relevant from irrelevant information. 
  5. Collaborative: Collaboration is integral and required for task completion. 
  6. Value-laden: Provides the opportunity to reflect and to involve students' beliefs and values.
  7. Interdisciplinary: Activities encourage interdisciplinary perspectives and enable learners to play diverse roles and build expertise that is applicable beyond a single well-defined field or domain. 
  8. Authentically assessed: Assessment is seamlessly integrated with learning in a manner that reflects how quality is judged in the real world.
  9. Authentic products: Authentic activities create polished products valuable in their own right rather than as preparation for something else. 
  10. Multiple possible outcomes: Activities allow a range and diversity of outcomes open to multiple solutions of an original nature, rather than a single correct response obtained by the application of predefined rules and procedures.
How much of this is currently being achieved in our schools? What would it take for schools to adopt some or all of these approaches?

'In real life, I assure you, there is no such thing as algebra.' - Fran Lebowitz.

References 
Reeves, T. C., Herrington, J., & Oliver, R. (2002). Authentic activity as a model for web-based learning. 2002 Annual Meeting of the American Educational Research Association, New Orleans, LA, USA.

Photo by Dana Bateman

Creative Commons License
10 characteristics of authentic learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

