In an article in the December 2008 issue of Inside Learning Technologies, Donald H. Taylor describes the process by which the early LMS systems were designed: “How did the vendors, or the market researchers they employed, guess what functionality to include in their LMS1.0? They asked potential clients.” This process, carried out in the late 1990s, was applied to most other types of software (apart from those following a worse process: asking the developers to build whatever they can). According to Taylor, the result of this process was a “functionality wish-list based on solving today's issues piecemeal, not building something better for the future.” To me, this is the embodiment of “you don't know what you don't know”: we invent things, solve problems and imagine based on what we know and have experienced, and as a result we miss out on “the trick of doing things better or differently.”
Taylor provides the following table as an example:
| PAPER-BASED APPROACH | TECHNOLOGY 'REPLICATION' | TECHNOLOGY EXTENSION |
| --- | --- | --- |
| Provide manuals for instruction in the classroom. | Provide the same manuals online as PDFs with no tutor support. | Divide resources into: (a) reference manuals, cross-referenced and with great searching; (b) easily searchable instruction manuals; (c) EPSS/help systems; (d) people available for help where they can have the most effect. |
| Collect paper evaluation forms after every class and analyse obsessively. | Collect electronic evaluation forms after every class and analyse obsessively. | Forget evaluation forms and instead identify skill gaps prior to learning. |
| Collect paper-based performance/competency information once a year during an annual review and do little with it. | Do the same, but electronically. | Do the same, but use Internet technologies to make the information always in view and always linked to performance. |
I believe that VLEs have suffered a similar fate to the corporate LMS. However, we are now starting to see systems that use technology to extend, not fix, our paper-based everyday life. This 'extension' is critical to enabling learning in the information age, where access to and contextualisation of information transforms it into knowledge.
These new 'systems' are not actually systems at all: the concept of content aggregation allows users to 'pull' specific pieces of information and connect them together into a context relevant to them. Content aggregation systems have also moved from replication to extension. As explained in the mash-up wiki, the two types of content aggregation system, portals and mash-ups, differ in that portals display information from different sources in the same form in which it was originally exposed. In essence this is technology replication: by 'cutting out' the pieces of information that interest us (in the form of RSS feeds and the like) and gluing them together on the same sheet, we create the portal. The mash-up, on the other hand, is technology extending an everyday task: it allows a user to take the raw data behind web pages (and other sources) and re-contextualise it.
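To make the portal side of this distinction concrete, here is a minimal sketch of portal-style 'replication' in Python. The feed names and item titles are invented, and the feeds are inlined as strings so the example runs without a network; it simply cuts each feed's items out and glues them onto one page, unchanged.

```python
import xml.etree.ElementTree as ET

# Two hypothetical RSS feeds, inlined so the sketch needs no network access.
FEED_A = """<rss><channel><title>News</title>
<item><title>Snow closes schools</title></item>
<item><title>Trains delayed</title></item>
</channel></rss>"""

FEED_B = """<rss><channel><title>Blog</title>
<item><title>My VLE wish-list</title></item>
</channel></rss>"""

def portal_page(feeds):
    """Glue every feed's items onto one page, in the exact form each
    source exposed them -- replication, not re-contextualisation."""
    page = []
    for xml in feeds:
        channel = ET.fromstring(xml).find("channel")
        source = channel.findtext("title")
        for item in channel.iter("item"):
            page.append((source, item.findtext("title")))
    return page

print(portal_page([FEED_A, FEED_B]))
```

The key point is that `portal_page` never interprets the data: items come out labelled by source but otherwise exactly as they went in.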
An excellent example of this more advanced content aggregation is the recent mash-up of tweets and Google Maps: tweets that scored the local snow out of 10, gave a postcode and used the hashtag #uksnow were plotted on a Google map of the UK, essentially transforming raw data (tweets) into information by means of contextualisation. This can be found at #uksnow Map 2.0 (see screenshot).
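The mash-up step, by contrast, does interpret the raw data. The following sketch (my own illustration, not the #uksnow Map's actual code) shows the idea: the tweets are invented, the postcode pattern is deliberately simplified, and the result is a postcode-to-score map that a mapping layer could then plot.

```python
import re

# Hypothetical tweets following the #uksnow convention described above:
# a postcode area, a score out of 10, and the #uksnow tag.
tweets = [
    "#uksnow SW1A 6/10 big flakes!",
    "Nothing settling yet #uksnow M1 2/10",
    "#uksnow EH1 9/10",
    "Lovely day, no snow here",   # no tag, so it is ignored
]

# Simplified outward-postcode pattern followed by a score out of 10.
PATTERN = re.compile(r"([A-Z]{1,2}\d{1,2}[A-Z]?)\s+(\d{1,2})/10")

def snow_by_postcode(tweets):
    """Turn raw tweets (data) into a postcode -> score map (information)."""
    scores = {}
    for tweet in tweets:
        if "#uksnow" not in tweet:
            continue
        match = PATTERN.search(tweet)
        if match:
            scores[match.group(1)] = int(match.group(2))
    return scores

print(snow_by_postcode(tweets))
```

The re-contextualisation happens in `snow_by_postcode`: free-text tweets go in, and structured, mappable facts come out.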
Moving forward, there is no doubt in my mind that the solution is, as Wilson et al. suggest, standardisation. However, it is very important that we do not choose a rigid, limiting set of standards based on APIs, which would ensure that only students with programming abilities could create a personalised environment. The answer could easily be products like Yahoo! Pipes or Apatar, which rely on a visual model to facilitate the mashing-up. The one point on which I disagree with Wilson et al. is that this is not the future: mashing up is happening every day, carried out by users who are not programming-savvy. In fact, the ability to contextualise without external intervention opens up an additional option which current systems do not deal with: informal learning.
Tying the student's ability to contextualise raw data to today's e-portfolios will eventually lead to the more student-centric approach that Ayala is pushing for. Personal development plans will define the path, and competencies and skills, rather than exam results, will be the outcomes. This will of course mean that the targets system in place today will need to be abolished; that will happen eventually, if not through intelligent governing then as a result of pressure from the corporate sector, which needs capable employees.
At the end of the day, and probably in the same manner as they did in every century, schools and universities will have to equip students with the tools to learn and continue learning: from the introductory (enabling students to read), through the basic (understanding how to access websites and how to evaluate their content), on to the intermediate (creating basic mash-ups) and the advanced (manipulating data in its raw form). Being literate in information technology, the digital natives of tomorrow will move within the scope of consuming digital information and creating raw data (coming from researchers in universities and the corporate sector). Anyone lacking the tools to process the never-ending sea of data will be consigned to an underclass equivalent to today's illiterates.
Keywords: IDELJAN10
Comments
Even there... babies have to process a "never ending sea of data" to make sense of the world. Making sense of the media is a newer phenomenon, but still dates back decades if not centuries. The flow of media information is speeding up and more compressed nowadays, perhaps, but even pre-Internet, any child in a school library or reading their parents' weekend newspaper was confronted with a (relatively speaking) never-ending sea of data. Kids dealt with it by ignoring some or most of it; many will continue to do the same when it comes to information online.
No – I think it will be worse. Being able to read opens up new horizons for poor communities (learning, jobs etc.)... being able to use the net in the information economy is akin to communication. As this is the case, people who are not electronically literate will not only miss out on these new horizons but will, in fact, be completely out of the game. I think this is the equivalent of immigrants in the UK who do not speak English: they can survive, but seldom do they thrive.
I agree completely that babies, and indeed people of all ages, process massive amounts of raw data every second (most of it unconsciously) and are very good at it. However, not having the tools required to process the raw data (or even access it) into information, is what puts the individual outside the economic cycle (especially acute when we're talking about the information economy).
According to the BBC, “More than three-quarters of people across the world believe access to the Internet is a fundamental right.” The problem is that having this right but not the skills to use it is like having your house connected to the mains power supply but having no sockets on the walls. In fact, some of the simplest jobs out there today require knowledge of operating computers even if they do not require any vocational or academic certificates. So, going back to my original point: not having the information-related skills will create an underclass in this type of economy.
Oh, I quite agree that an underclass will emerge in relation to these skills - my point is just that illiteracy itself condemns one to an underclass even more profoundly. Even without IT skills, you can still do a lot if you can read; that amount you can do might reduce as IT becomes more and more a requirement in achieving everyday tasks, but you'd still be ahead of the illiterate.
Or maybe the illiterate will be able to leapfrog the IT-illiterate once technology becomes driven by video, audio and icons, and texts hardly come into it anymore... then things could get really interesting. (& disturbing)
To get a driving licence today you need to be computer literate – part of the test is taken on a PC. In Nigeria, where electricity is intermittent (about three hours a day), they regard computer (or perhaps tech) literacy as equivalent to reading (and maybe more) – they are now looking to mobile phones to bridge the gap. The things that you can do with reading alone will not get you out of the poverty circle.
I think at the end of the day you agree with me – the point about leapfrogging means that people from the underclass will bridge the gap and get ahead. This situation is currently in progress in Russia and India, where one can find very cheap and knowledgeable developers. The next place this will happen is probably China.
The point is that for the illiterates to have leapfrogged the literates they have to be literate first - no matter how they get there (or how easy it is to be literate).
There are many discussions looking at language as liberating or limiting. In the long term, when we are hooked up to computers, literacy in general will not be relevant. Luckily (my personal opinion) we are still not there...