Blog :: All

September 25, 2008

It's 7.30 on a Friday morning, at work, just seeing how this works and what it's going to look like.

Posted by Matthew Weaver | 0 comment(s)

May 08, 2008

It was my intention to keep semi-regular posts on my engagement with the "Effective Course Design for e-Learning" module. As you can see - this didn't happen!

The first week introduced us to some of the current theories around styles of course design (Toohey, 1999) and how some of them might have been adopted based upon personal preferences or, even, encouraged by external political pressures and agencies.

Weeks 2 to 4 explored the different approaches, which were:

  • Traditional or discipline-based approach
  • Performance or systems-based approach
  • Cognitive approach
  • Experiential or personal relevance approach
  • Socially critical approach

Using a combination of different readings and a wiki to collect and collate thoughts about them - looking for real-world examples of these different approaches in action and suggesting additional resources to help build up a coherent bank of knowledge - I found myself quite enamoured with the socially critical approach, which attempts to look at a particular issue that needs to be debated and discussed with a view to making significant changes to how that issue currently operates.

Whilst I felt that this module would be enormously valuable to me as a learning technologist who advises and develops staff to use the University's learning systems, like Blackboard, to the best of their abilities, I also felt hampered by not having enough traditional teaching experience to get to the nitty-gritty of some of the concepts and ideas that were presented - something that would have quite a profound effect upon my assignment.

In weeks 5 to 7, my peers and I were put into groups (named after fruit) to discuss, devise and develop a miniature "learning event" around a topic or theme that interested us, using one or more of the approaches we had been looking at for the past 4 weeks. The other members of the group would then take part in the "learning event" and give feedback on it. I wanted to do something that involved the socially critical approach and was rather inspired by the work done by Turnley (2005). I wanted my participants to look at developments within the so-called "Web 2.0" phenomenon and how they would impact upon and enhance their research practices. I called this concept "Research 2.0" - a pun on how people have used the notion of versioning to try and describe something that was different (and in some cases better!).

I used the Holyrood Park Elgg site to deliver the event and asked my participants to write a little critique. Whilst they said that they enjoyed it, it was debatable as to whether any actual "learning" occurred. These experiences would then form the basis of the reflective report; the feedback from that report suggested to me that I was being overly ambitious with what I wanted to achieve, especially given my lack of teaching experience - so I had probably chosen an approach that was best adopted by someone with considerably more teaching experience than myself.

Week 8 looked at assessment and how it is partly defined by well-constructed aims and learning outcomes. Weeks 9 to 10 covered course evaluation and course usability; again, we could choose which topic to spend 2 weeks exploring in some depth.

Finally, in weeks 11 to 12, we worked on our assignments, which involved writing a course outline, a course rationale that explained our thinking, and some semblance of a course constructed within some kind of learning environment. Despite the rather good mark for this assignment, I personally felt that I didn't spend enough time to do the course any justice - illness, project meetings across the country and a much-needed holiday got in the way of that.

The big thing that I learnt from this module is that online courses don't start with the technology - they begin with pen, paper, a whole lot of thinking and several cups of coffee, working out what you want to achieve with the course and what you expect people to get out of it: what is learnt, what you want them to experience, and how you challenge their thinking in the process.

References

Moon, J. (2002). The Module and Programme Development Handbook. London: Kogan Page.

Toohey, S. (1999). Designing Courses for Higher Education. Buckingham: Open University Press.

Turnley, M. (2005). Contextualized design: Teaching critical approaches to web authoring through redesign projects. Computers and Composition, 22(2), pp. 131-148.

Posted by Wayne Barry | 0 comment(s)

March 05, 2008

This is the first blog post in Holyrood Park for me - just to test out how it all works!

Posted by Ruby Rennie | 0 comment(s)

January 29, 2008

Whilst almost all official mention of these tests has disappeared from the Internet, they remain in my thoughts*. The tests only reached the pilot phase of development, and I was lucky enough to be placed in one of the schools that had been ear-marked for testing. They were aimed at year 9 pupils, or those at the end of their compulsory Key Stage 3 ICT programmes.

This test was a first for many schools, as it delivered the exam in the form of an on-screen assessment. The software was a mock-up of a traditional GUI such as Windows XP or Mac OS X. Pupils received their test questions 'via email' (time-released by the software) in the built-in email client. The aim was to put to use all the skills they should have learnt during their 3 years of ICT classes.

As recommended by Bull and McKenna (2003), the students were exposed to the new software environment up to 7 weeks before the tests for around an hour per week (more if they wanted), in order to familiarise themselves with the layout of the applications and the available tools. Many (senior) teachers criticised the tests, as they thought it criminal not to test students in the environment that they had learnt in - in my school this was Windows XP. They assumed that this would be the only environment the students would use outside of school in the workplace, and therefore questioned the need to ever learn a new one. This simplistic view of ICT from senior management (and at curriculum level) is one of the reasons I have moved away from teaching it this year. If anything, the use of a new software environment helped us (as teachers) to identify those students who had been learning surface-level routines in XP rather than gaining a deep understanding of what they were actually doing.

From the reading I have already conducted around this topic, I can tell that this environment was an innovation in CAA. Firstly, it shied away from the traditional MCQs and Boolean questions that would be expected, in favour of contextualised tasks. This was made possible by the sophistication of the assessment system, which was also able to check the file structures and contents at the end of the exam and keep a record of how students performed specific tasks. One downside was that there was no instant feedback for the teacher or student, as the marks had to be independently moderated for anomalies.

Just before the final pilot went ahead in May 2007, it was announced that the tests would not be introduced as compulsory summative exams as expected; instead they would be made available for formative assessment as and when teachers and students were ready. Probably a wise decision, as there would be no real benefit from this kind of summative assessment for the pupils' learning or the teachers' teaching. The primary role of the summative test would have been to provide accountability and fuel more league-table competitiveness.

In summary – this is a good and useful assessment technology which was partially introduced with less than useful intentions, but held its own in pilot testing on a fairly wide scale. This type of software will be a useful assessment tool for early stages of ICT education.

 

* One of the remaining official documents can be found on a secondary school server here

Keywords: CAA, IIOA, QCA KS3 ICT on-screen assessments

Posted by Stuart Easter | 0 comment(s)

January 26, 2008

On my quest to discover 'what is online assessment' I have read the first two chapters of Bull and McKenna's seminal work on Computer Assisted Assessment, as they define it. At first glance, this title and the content of their book do not appear too relevant to my study, as the word 'assisted' implies a weak influence. In fact, on further reflection, the majority of online assessment is 'assisted', as humans still maintain control over various factors, such as question content, assessment times and formats.

They note that some of the most common forms of online assessment are MCQs and Boolean options. Whilst I have experienced these methods both as a teacher and a student, I have also submitted assignments that I have produced on my computer through either email or a VLE for assessment by a teacher – surely this constitutes a form of online assessment? I have also participated in one of the QCA KS3 ICT pilot tests as a teacher. These tests provided new software environments for students to perform tasks based on what they had learnt in their classes. The tasks were recorded by the software using mouse click tracking and file scanning at the end, but were also submitted to human moderators for ‘authentication’. Hopefully I will find time to post more about this experience as part of this 2-week block.

They offer a concise list of reasons for using CAA:

1. To increase the frequency of assessment, thereby motivating students to learn and encouraging them to practise skills.
2. To broaden the range of knowledge assessed.
3. To increase feedback to students and teachers.
4. To extend the range of assessment methods.
5. To increase objectivity and consistency.
6. To decrease marking loads.
7. To aid administrative efficiency.

This list is fairly logical, although with every positive there is a potential negative (as with most things!). Increased frequency of assessment could lead to pupil agitation and subsequent disengagement. A broader range of assessment could mean subjects don't get assessed in as much depth as previously. Increased feedback could overload and confuse students (a bit far-fetched, I suppose). An increased range of assessment tools could definitely confuse students who are used to alternative assessment methods, and it could even put a different skew on results, as would be expected when assessing using different methods. If questions are tailored towards more objective subjects, then some topics could be overlooked. If teachers are further removed from the marking process, they run the risk of misinterpreting the results. I'm struggling to think of the negative side of increased administrative efficiency - but I'm fairly sure that's not too connected to the learning that is taking place.

Bull and McKenna make some sensible observations about how to go about using CAA. Is it actually appropriate? More on this with some other readings, hopefully (e.g. Brosnan, M. (1999). Computer anxiety in students: should computer-based assessment be used at all?). A couple of other sensible points: "CAA objective tests should only be used as one of a number of assessment methods… The implementation of a learning technology should be integrated with the structure and delivery of a course." It is also important to guard against testing IT skills rather than subject content.

In the second chapter the authors touch on a number of issues raised by the e-assessment conundrum. A big concern here is the interface between the software development industry and education. Could CAA technology influence (or even determine) pedagogic practice by including certain question formats and enabling specific feedback formats? Furthermore, the cost of various CAA software packages might determine which products are used, and therefore determine the types of test formats that are available.

“CAA enables collection of detailed data on formative activities – but this should be balanced against surveillance concerns raised by Land and Bayne (2002).”

Electronic literacy: CAA involves more than assessment of subject expertise; it also requires understanding how online environments mediate and even construct knowledge. Rather than traditional linear texts, students are exposed to "visual literacy" (Kress, 1998): the logic of the simultaneous presence of a number of elements and their spatial relation to each other - a core issue that is somewhat addressed by Prensky (2001) and Monereo (2004).

Keywords: bull and mckenna, caa, computer assisted assessment, IIOA

Posted by Stuart Easter | 0 comment(s)

January 20, 2008

I liked this paper's no-nonsense approach (and it was written in a style I could access easily). Several good bits of content:-

From the paper:-

The essence of the challenge for all educators in the 21st century is to get the learners to:-

  • read more widely
  • see more clearly
  • think more clearly
  • (why am I thinking of the song "Day By Day"?)
  • challenge authority on every occasion
  • more importantly get learners to challenge themselves

The aim is to promote the free flow of information and ideas in the interest of all, and to promote a thriving culture, economy and democracy.

Information Literacy is the ability to deal with the complexities of the current information environment - it must

  • subsume all the skill-based literacies but not be restricted by them
  • not be restricted to any one technology / technology group
  • centre around understanding, meaning and context

So much e-learning remains as e-teaching (the provision of lecture material online) - is this due to poor information literacies amongst the tutors?

The "information literate" are those who know when they need information and are able to identify, locate, evaluate, organise and effectively use the information to address and resolve problems.

The Australian Information Literacy Standards

An information literate individual has learned how to learn and is able to:-

  1. recognise a need for information
  2. determine the extent of the information needed
  3. access the needed information efficiently
  4. evaluate the information and its sources
  5. incorporate selected information into their knowledge base
  6. use information effectively to accomplish a purpose
  7. understand the economic, legal, social and cultural issues around the use of information
  8. access and use information ethically and legally
  9. classify / store / manipulate the information generated
  10. recognise information literacy as a pre-requisite for lifelong learning

Keywords: information literacies technology fluency Bundy

Posted by Andrew Miller | 0 comment(s)

I'm sorry, but I found this paper rather dull, although it did contain some little gems of information I could use.

Firstly, Barrett attests that most graduates did not have a clear sense of their research aims at the start of the process - they fumbled about and were guided by colleagues, tutors and supervisors. This is so good to hear, as I am usually in the same boat. The important thing here is that this is probably when most of the searching of libraries and whatnot occurs - so that searching can at best be unfocussed and at worst be blind fishing. Without good IL skills, the period of fuzziness is probably an awful lot longer than it needs to be.

The second little gem was that most students lack personal collections and substantial subject expertise. Again, I thought I was alone but so many people I have spoken to lack a personal collection or just have haphazard piles of documents in cupboards or piled on desks. From this knowledge I feel I can make best use of the web-based personal catalogues offered by del.icio.us, Connotea, Furl It, Zimbio and the like. All the tools are there - we just don't use them. I shall catalogue all my piles of paper.

Keywords: information literacies seeking catalogues

Posted by Andrew Miller | 0 comment(s)

Hellfire!

What a paper to start us off on! It was like pulling teeth but I got there in the end I think. A good (content) opener for the course as it provided so much food for thought.

Intertextuality has to exist otherwise we would have to write everything de novo each time - scientific advances would be limited to the lifespan of any one scientist.

Newspapers often translate the "official" language of politicians and the like into the vocabulary of the day-to-day spoken word (or rather the newspaper's interpretation of the spoken word). Why do they have to do this? Is it that "official" language is not digestible by the masses, or are we losing the ability to understand "proper" vocabulary? I fear I do not know the answer to this!

From the paper:-

Many non-commodity institutions are being drawn more and more into the commodity model and the matrix of consumerism - they are under pressure to "package" their "commodities" and "sell" them to "consumers".

Presuppositions (based on prior texts of the text-producers or on other texts) can be manipulative as well as sincere - they are a good way of manipulating people, as they are very difficult to challenge.

A genre is not only a particular text type but also a particular process of producing, distributing and consuming that text.

A discourse is a particular way of constructing a subject matter. For example, medicine is an area of knowledge constructed from a technological and scientific perspective, unlike that of "alternative medicine".

Keywords: language culture communication intertextuality fairclough

Posted by Andrew Miller | 0 comment(s)

Although this paper was a good read I do feel that it took an awfully long time to say not a lot.

Reading the paper did improve my understanding of sequential and cultural contexts in speech utterances and the importance of considering these when analysing dialogues.

Understanding the relationships between conversation participants helps understand the conversation through analysing the dialogue - are the participants on an equal footing or does one have some sort of superiority over another? This would change the giving and receiving of an utterance.

From the paper:-

There is no point looking at a single utterance without considering its place in the local sequence of utterances, and there is no point just looking at its sequential place if the contextual details are available. Contextual knowledge is a luxury, though.

The analyst must know the cultural as well as the sequential rules for the use of certain utterances to correctly analyse the dialogue.

Keywords: language culture communication context sequential McHoul Rapley Antaki

Posted by Andrew Miller | 0 comment(s)

January 19, 2008

I enjoyed this paper a lot more than I thought I would - although it got a bit techie in some areas, I think I got a lot out of it - mainly the highlighting that any discourse is a product of its participants. Those participants bring to that discourse their own expectations and histories, what had led them to have those expectations, and external influences such as institutional / social policies and discourses.

Reading this paper has made me quite excited about doing some actual discourse analysis. I know I've got a lot more reading to do first, but I think I'm starting to understand the complexity of the subject and intend to have a first bash at things quite soon - I think I'll record one of my sessions at work next week and see what I can do about analysing it.

Good things I got from the paper - the actual process

Analysis of the teacher-pupil discourse

  • Looked at how the teacher and all of the pupils interacted
  • Was there any encouragement / discouragement? What forms did these take?
  • How was discourse encouraged / discouraged? Did these change from pupil to pupil / over the time of the study?
  • How much did each pupil talk and did this change over the course of the study?
  • Was the students' talk "useful"? Did it use the vocabulary of the subject or help others in the class forward their understanding of the subject?

Analysing the teacher's intentions within the discourse

  • Why did the teacher behave the way she did in the classroom?
  • Were some pupils encouraged / discouraged more than others? If so, then why?
  • What were the teacher's expectations of the discourse and were these satisfied? What are the sources / aspects which have defined the teacher's expectations?
  • How controlling was the teacher in the classroom? Did this have an effect on the discourse?
  • What assumptions were made by the teacher in the classroom and did these have an effect on the overall discourse?
  • Are there any sorts of pressures acting on the teacher which could have / did have an effect on her in the classroom? If so, how did they manifest themselves?

Alignment of teacher's intentions with policy / institutional discourse

  • Did the teacher's actions support the policies of the institution / society or not?
  • How did policies and institutional discourses manifest themselves in the classroom and did they affect the classroom discourse in a beneficial / detrimental way?
  • Did the teacher manage to achieve or advance the curriculum requirements or not? How? Why?

So much food for thought!

Keywords: language culture communication discourse analysis black classroom

Posted by Andrew Miller | 0 comment(s)
