Whilst almost all official mention of these tests has disappeared from the Internet, they remain in my thoughts*. The tests only reached the pilot phase of development, and I was lucky enough to be placed in one of the schools that had been earmarked for testing. They were aimed at Year 9 pupils, that is, those at the end of their compulsory Key Stage 3 ICT programmes.
This test was a first for many schools as it delivered the exam in the form of an on-screen assessment. The software was a mock-up of a traditional desktop GUI such as Windows XP or Mac OS X. Pupils received their test questions ‘via email’ (time-released by the software) in the built-in email client. The aim was to put to use all the skills they should have learnt during their three years of ICT classes.
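The pilot software itself was never released publicly, so the mechanism can only be imagined, but the time-released delivery described above might be sketched roughly as follows. Everything here (class names, fields, the tasks) is my own invention for illustration, not the actual system:

```python
from dataclasses import dataclass, field

@dataclass
class TimedMessage:
    """One task, disguised as an email, released part-way through the exam."""
    subject: str
    body: str
    release_minute: int  # minutes after the exam starts


@dataclass
class MockInbox:
    """Simulated email client that drip-feeds tasks during the exam."""
    messages: list = field(default_factory=list)

    def visible(self, elapsed_minutes: int) -> list:
        """Return only the messages whose release time has passed."""
        return [m for m in self.messages if m.release_minute <= elapsed_minutes]


inbox = MockInbox([
    TimedMessage("Task 1: format the report", "...", 0),
    TimedMessage("Task 2: build the spreadsheet", "...", 20),
])
print([m.subject for m in inbox.visible(5)])  # only the first task has been released
```

The point of the design is simply that pupils cannot see later tasks early, so pacing is controlled by the software rather than an invigilator.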
As recommended by Bull and McKenna (2003), the students were exposed to the new software environment up to seven weeks before the tests, for around an hour per week (more if they wanted), in order to familiarise themselves with the layout of the applications and the available tools. Many (senior) teachers criticised the tests because they thought it criminal not to test students in the environment they had learnt in – in my school this was Windows XP. They assumed that Windows XP would be the only environment the students would use outside of school in the workplace, and therefore questioned the need to ever learn a new one. This simplistic view of ICT from senior management (and at curriculum level) is one of the reasons I have moved away from teaching it this year. If anything, the use of a new software environment helped us (as teachers) to identify those students who had been learning surface-level routines in XP rather than gaining a deep understanding of what they were actually doing.
From the reading I have already conducted around this topic, I can tell that this environment was an innovation in CAA. Firstly, it shied away from the traditional MCQs and Boolean questions that would be expected, in favour of contextualised tasks. This was made possible by the sophistication of the assessment system, which was able to check the file structures and contents at the end of the exam and keep a record of how students performed specific tasks. One downside was that there was no instant feedback for the teacher or student, as the marks had to be independently moderated for anomalies.
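Nothing about the marking engine was ever published, but the end-of-exam file-structure check described above could plausibly work along these lines. This is a minimal sketch under my own assumptions; the folder and file names are invented, and the real system presumably did far more (content checks, per-task event logs):

```python
import os

# Hypothetical expected outcome of one contextualised task:
# the pupil should have created a folder containing two named files.
EXPECTED = {
    "Newsletter": ["draft.txt", "final.txt"],
}


def check_file_structure(root: str) -> dict:
    """Report, per expected folder, which of the expected files exist."""
    report = {}
    for folder, files in EXPECTED.items():
        folder_path = os.path.join(root, folder)
        report[folder] = {
            name: os.path.isfile(os.path.join(folder_path, name))
            for name in files
        }
    return report
```

A marker (human or automated) could then read the report off per task, which also explains why results were not instant: the raw checks still needed moderating for anomalies before marks were released.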
Just before the final pilot went ahead in May 2007, it was announced that the tests would not be introduced as compulsory summative exams as expected; instead, they would be made available for formative assessment as and when teachers and students were ready. This was probably a wise decision, as there would have been no real benefit from this kind of summative assessment for the pupils’ learning or the teachers’ teaching. The primary role of the summative test would have been to provide accountability and fuel more league-table competitiveness.
In summary: this is a good and useful assessment technology which was partially introduced with less than useful intentions, but held its own in pilot testing on a fairly wide scale. This type of software will be a useful assessment tool for the early stages of ICT education.
* One of the remaining official documents can be found on a secondary school server here