
Jo Trumpeneers :: Blog

July 14, 2009

I hope this links up OK

Posted by Sian Bayne | 0 comment(s)

June 11, 2009

linking from hp

Keywords: linking testing storytlr

Posted by Sian Bayne | 0 comment(s)

January 29, 2008

Whilst almost all official mention of these tests has disappeared from the Internet, they remain in my thoughts*. The tests only reached the pilot phase of development, and I was lucky enough to be placed in one of the schools that had been earmarked for testing. They were aimed at Year 9 pupils, or those at the end of their compulsory Key Stage 3 ICT programmes.

This test was a first for many schools as it delivered the exam in the form of an on-screen assessment. The software was a mock-up of a traditional GUI such as Windows XP or Mac OS X. Pupils received their test questions ‘via email’ (time-released by the software) in the built-in email client. The aim was to put to use all the skills they should have learnt during their three years of ICT classes.
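The time-released delivery could be imagined as a simple scheduler: each task carries a release offset from the exam start, and the mock email client only shows messages whose release time has passed. A minimal sketch of that idea (the function name, task names and offsets are my own illustration, not the actual QCA software):

```python
from datetime import datetime, timedelta

def visible_questions(questions, exam_start, now):
    """Return the tasks whose timed release has passed.

    questions: list of (subject, minutes_after_start) pairs.
    """
    released = []
    for subject, offset_minutes in questions:
        release_at = exam_start + timedelta(minutes=offset_minutes)
        if now >= release_at:
            released.append(subject)
    return released

# Hypothetical exam: three tasks released at 0, 15 and 30 minutes.
start = datetime(2007, 5, 1, 9, 0)
tasks = [("Task 1: spreadsheet", 0),
         ("Task 2: email reply", 15),
         ("Task 3: presentation", 30)]
print(visible_questions(tasks, start, start + timedelta(minutes=20)))
# twenty minutes in, the first two tasks have been released
```

The point of the sketch is just that the ‘email’ arriving mid-exam is software-controlled timing, not a real mail server.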

As recommended by Bull and McKenna (2003), the students were exposed to the new software environment for up to seven weeks before the tests, for around an hour per week (more if they wanted), in order to familiarise themselves with the layout of the applications and the available tools. Many (senior) teachers criticised the tests, thinking it criminal not to test students in the environment they had learnt in – in my school this was Windows XP. They assumed that this would be the only environment the students would use outside of school in the workplace, and therefore questioned the need ever to learn a new one. This simplistic view of ICT at senior management (and curriculum) level is one of the reasons I have moved away from teaching it this year. If anything, the use of a new software environment helped us (as teachers) to identify those students who had learnt surface-level routines in XP rather than developing a deep understanding of what they were actually doing.

From the reading I have already conducted around this topic, I can tell that this environment was an innovation in CAA. Firstly, it shied away from the traditional MCQs and Boolean questions that would be expected, in favour of contextualised tasks. This was made possible by the sophistication of the assessment system, which was also able to check the file structures and contents at the end of the exam and keep a record of how students performed specific tasks. One downside was that there was no instant feedback for the teacher or student, as the marks had to be independently moderated for anomalies.

Just before the final pilot went ahead in May 2007, it was announced that the tests would not be introduced as the expected compulsory summative exams; instead they would be made available for formative assessment as and when teachers and students were ready. This was probably a wise decision, as there would be no real benefit in this kind of summative assessment for the pupils’ learning or the teachers’ teaching. The primary role of the summative test would have been to provide accountability and fuel more league-table competitiveness.

In summary – this is a good and useful assessment technology which was partially introduced with less than useful intentions, but held its own in pilot testing on a fairly wide scale. This type of software will be a useful assessment tool for the early stages of ICT education.

 

* One of the remaining official documents can be found on a secondary school server here

Keywords: CAA, IIOA, QCA KS3 ICT on-screen assessments

Posted by Stuart Easter | 0 comment(s)

January 26, 2008

On my quest to discover ‘what is online assessment’ I have read the first two chapters of Bull and McKenna’s seminal work on Computer Assisted Assessment, as they define it. At first glance, this title and the content of their book do not appear too relevant to my study, as the word ‘assisted’ implies a weak influence. In fact, on further reflection, the majority of online assessment is ‘assisted’, as humans still maintain control over various factors, such as question content, assessment times and formats.

They note that some of the most common forms of online assessment are MCQs and Boolean options. Whilst I have experienced these methods both as a teacher and a student, I have also submitted assignments that I have produced on my computer through either email or a VLE for assessment by a teacher – surely this constitutes a form of online assessment? I have also participated in one of the QCA KS3 ICT pilot tests as a teacher. These tests provided new software environments for students to perform tasks based on what they had learnt in their classes. The tasks were recorded by the software using mouse click tracking and file scanning at the end, but were also submitted to human moderators for ‘authentication’. Hopefully I will find time to post more about this experience as part of this 2-week block.

They offer a concise list of reasons for using CAA:

1. To increase the frequency of assessment, thereby motivating students to learn and encouraging them to practise skills.
2. To broaden the range of knowledge assessed.
3. To increase feedback to students and teachers.
4. To extend the range of assessment methods.
5. To increase objectivity and consistency.
6. To decrease marking loads.
7. To aid administrative efficiency.

This list is fairly logical, although every positive has a potential negative (as with most things!). Increased frequency of assessment could lead to pupil agitation and subsequent disengagement. A broader range of assessment could mean subjects don’t get assessed in as much depth as previously. Increased feedback could overload and confuse students (a bit far-fetched, I suppose). An increased range of assessment tools could definitely confuse students who are used to alternative assessment methods, and could even skew results, as would be expected when different methods are used. If questions are tailored towards more objective subjects, then some topics could be overlooked. If teachers are further removed from the marking process, they run the risk of misinterpreting the results. I’m struggling to think of the negative side of increased administrative efficiency – but I’m fairly sure that’s not too connected to the learning that is taking place.

Bull and McKenna make some sensible observations about how to go about using CAA. Is it actually appropriate? More on this with some other readings hopefully (e.g. Brosnan, M. (1999). Computer anxiety in students: should computer-based assessment be used at all?). A couple of other sensible points: “CAA objective tests should only be used as one of a number of assessment methods….The implementation of a learning technology should be integrated with the structure and delivery of a course.” It is also important to guard against testing IT skills rather than subject content.

In the second chapter the authors touch on a number of issues raised by the e-assessment conundrum. A big concern here is the interface between the software development industry and education. Could CAA technology influence (or even determine) pedagogic practice by including certain question formats and enabling specific feedback formats? Furthermore, the cost of various CAA software packages might determine which products are used, and therefore the types of test format that are available.

“CAA enables collection of detailed data on formative activities – but this should be balanced against surveillance concerns raised by Land and Bayne (2002).”

Electronic literacy: CAA involves more than the assessment of subject expertise; it also requires an understanding of how online environments mediate and even construct knowledge. Rather than traditional linear texts, students are exposed to “visual literacy” (Kress, 1998): the logic of the simultaneous presence of a number of elements and their spatial relation to each other – a core issue that is somewhat addressed by Prensky (2001) and Monereo (2004).

Keywords: bull and mckenna, caa, computer assisted assessment, IIOA

Posted by Stuart Easter | 0 comment(s)

January 22, 2008

In this section of the module I will look at how assessments have moved online and what the impacts of this transition have been. It is important to distinguish between the online assessment of a face-to-face course and the online assessment of an online course. There are probably currently more online assessments than online courses, and hopefully throughout this topic we will begin to understand the reasons and motivations for this. I will also hopefully find time to briefly review a couple of the technologies that make online assessment possible.

Initial readings for this topic:

Bull, J. and McKenna, C. (2003) Blueprint for Computer Assisted Assessment (chapters 1–2).

Ross, J. A. and Ross, M. (2005) Student Assessment in Online Courses: Research and Practice, 1993–2004. Canadian Journal of University Continuing Education, Vol. 31, No. 2.

Scalise, K. et al. (2006) Assessment for e-Learning: Case studies of an emerging field. 13th International Objective Measurement Workshop.

Ridgway, J. (2004) Literature Review of E-assessment. FutureLab. Available at: http://www.futurelab.org.uk/resources/documents/lit_reviews/Assessm

Further suggestions about what to read or what assessment tools to review are always welcome.

Keywords: computer assisted assessment, e-assessment, IIOA, potential readings

Posted by Stuart Easter | 1 comment(s)

December 19, 2007

this post is public!

Posted by Sian Bayne | 2 comment(s)