In an article in the December 2008 issue of Inside Learning Technologies, Donald H. Taylor describes the process by which the early LMS systems were designed: “How did the vendors or the market researchers they employed, guess what functionality to include in their LMS1.0? They asked potential clients.” This process, carried out in the late 90s, was applied to most other types of software (apart from those following a worse process: asking the developers to build whatever they saw fit). According to Taylor, the result was a “functionality wish-list based on solving today's issues piecemeal, not building something better for the future.” To me, this is the embodiment of “you don't know what you don't know”: we invent things, solve problems and imagine based on what we know and have experienced, and as a result we miss out on “the trick of doing things better or differently.”
Taylor provides the following table as an example:
| PAPER-BASED APPROACH | TECHNOLOGY 'REPLICATION' | TECHNOLOGY EXTENSION |
| --- | --- | --- |
| Provide manuals for instruction in the classroom. | Provide the same manuals as PDFs online, with no tutor support. | Divide resources into: (a) reference manuals, cross-referenced and with excellent search; (b) easily searchable instruction manuals; (c) EPSS/help systems; (d) people available to help where they can have the most effect. |
| Collect paper evaluation forms after every class and analyse them obsessively. | Collect electronic evaluation forms after every class and analyse them obsessively. | Forget evaluation forms and instead identify skill gaps prior to learning. |
| Collect paper-based performance/competency information once a year during an annual review and do little with it. | Do the same, but electronically. | Do the same, but use Internet technologies to keep the information always in view and always linked to performance. |
I believe that VLEs have suffered a similar fate to that of the corporate LMS. However, we are now starting to see systems that use technology to extend, rather than merely replicate, our paper-based everyday practices. This 'extension' is critical to enabling learning in the information age, where access to information, and its contextualisation, transforms it into knowledge.
These new 'systems' are not actually systems at all: the concept of content aggregation allows users to 'pull' specific pieces of information and connect them into a context relevant to them. Content aggregation has itself moved from replication to extension. As the mash-up wiki explains, the two types of content aggregation system, portals and mash-ups, differ in that a portal displays information from different sources in the same form in which it was originally exposed. In essence this is technology replication: by 'cutting out' the pieces of information that interest us (in the form of RSS feeds and the like) and gluing them together on the same page, we create a portal. A mash-up, on the other hand, is technology extending an everyday task: it allows a user to take the raw data behind web pages (and other sources) and re-contextualise it.
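The portal-style 'replication' described above can be sketched in a few lines of Python. This is a minimal, hypothetical example (the feed contents and the `portal_aggregate` helper are invented for illustration): it simply cuts the items out of two RSS feeds and glues them onto one page, exactly as each source exposed them.

```python
import xml.etree.ElementTree as ET

# Two hypothetical RSS feeds, inlined so the sketch is self-contained.
FEED_A = """<rss version="2.0"><channel><title>Course news</title>
<item><title>Exam dates published</title><link>http://example.org/a/1</link></item>
<item><title>New reading list</title><link>http://example.org/a/2</link></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Library updates</title>
<item><title>Extended opening hours</title><link>http://example.org/b/1</link></item>
</channel></rss>"""

def portal_aggregate(feeds):
    """Portal-style 'replication': cut each feed's items out and glue them
    onto one page, keeping them exactly as the source exposed them."""
    page = []
    for xml in feeds:
        channel = ET.fromstring(xml).find("channel")
        source = channel.findtext("title")
        for item in channel.iter("item"):
            page.append((source, item.findtext("title"), item.findtext("link")))
    return page

for source, title, link in portal_aggregate([FEED_A, FEED_B]):
    print(f"[{source}] {title} -> {link}")
```

Note that nothing here is transformed: the portal's value is purely in co-locating the items, which is exactly why it counts as replication rather than extension.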
An excellent example of this more advanced form of content aggregation is the recent mash-up of tweets and Google Maps: tweets that scored the local snow out of 10, gave a postcode and used the #uksnow tag were plotted on a Google map of the UK, essentially transforming raw data (the tweets) into information by means of contextualisation. This can be found at: #uksnow Map 2.0 (see screenshot).
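To make the contextualisation step concrete, here is a minimal Python sketch of the kind of processing behind a #uksnow-style mash-up. The tweet texts, the postcode pattern and the `snow_by_area` helper are all invented for illustration; the real map pulled live tweets and plotted them via Google Maps rather than summarising a hard-coded list.

```python
import re
from collections import defaultdict

# Hypothetical tweets in the #uksnow format described above:
# a postcode area, a score out of 10, and the #uksnow tag.
tweets = [
    "Big flakes here! EH3 7/10 #uksnow",
    "Barely a dusting SW1 2/10 #uksnow",
    "Whiteout conditions EH3 9/10 #uksnow",
    "Lovely sunny day in Brighton",          # no tag or score: ignored
]

# Simplified postcode-area pattern, e.g. "EH3" or "SW1".
PATTERN = re.compile(r"\b([A-Z]{1,2}\d{1,2})\b.*?\b(\d{1,2})/10\b.*?#uksnow")

def snow_by_area(tweets):
    """Turn raw tweets into information: average snow score per postcode area."""
    scores = defaultdict(list)
    for tweet in tweets:
        match = PATTERN.search(tweet)
        if match:
            area, score = match.group(1), int(match.group(2))
            scores[area].append(score)
    return {area: sum(s) / len(s) for area, s in scores.items()}

print(snow_by_area(tweets))
```

The interesting move is the last one: the tweets arrive as unstructured text, and only the act of extracting and grouping them by postcode area gives the data a geographic context it never had on Twitter itself.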
Moving forward, there is no doubt in my mind that the solution is, as Wilson et al. suggest, standardisation. However, it is very important that we do not choose a rigid, limiting set of standards built around APIs, which would ensure that only students with programming skills could create a personalised environment. The answer could easily be products like Yahoo! Pipes or Apatar, which rely on a visual model to make mashing-up accessible. The one point on which I disagree with Wilson et al. is the suggestion that this lies in the future: mashing up is happening every day, carried out by users who are not programming-savvy. In fact, the ability to contextualise without external intervention opens up an additional option that current systems do not address: informal learning.
Tying the student's ability to contextualise raw data to today's e-portfolios will eventually lead to the more student-centric approach that Ayala is pushing for. Personal development plans will define the path, and competencies and skills, rather than exam results, will be the outcomes. This will of course mean that the targets system in place today will need to be abolished; that will happen eventually, if not through intelligent governance then as a result of pressure from a corporate sector that needs capable employees.
At the end of the day, and probably much as they have in every century, schools and universities will have to equip students with the tools to learn and to continue learning: from the introductory (enabling students to read), through the basic (understanding how to access websites and evaluate their content), to the intermediate (creating simple mash-ups) and the advanced (manipulating data in its raw form). In the information age, the digital natives of tomorrow will move within the spectrum between consuming digital information and creating raw data (coming from researchers in universities and the corporate sector). Anyone lacking the tools to process the never-ending sea of data will be consigned to an underclass equivalent to today's illiterates.
Keywords: IDELJAN10
