This refers to education in general, in both face-to-face and online contexts. I must admit I got sidetracked while researching this topic, but being a teacher myself, and currently working in a rather assessment-obsessed environment, I couldn’t resist the temptation of writing more on assessment. So here you go:
ON WORKING TOGETHER (ONLINE)
There is no doubt that group work can be beneficial to learning - it is more student-centred, and it promotes self-directed learning and the social construction of knowledge. In more practical terms, it helps students develop a range of important skills, including problem-solving, communication, leadership, collaboration and decision-making, all of which are personal and professional assets sought by prospective employers. However, some groups prove dysfunctional, mostly due to the so-called ‘lone wolves’ or the phenomenon of social loafing.
Lone wolves are primarily interested in achieving their own goals, while social loafing can be defined as little or no activity on the part of a group member who ‘shirks their obligations in the hope of benefitting from the work of others’ (Dommeyer, 2007:175). If the loafer or free rider manages to slip through the system unnoticed, they receive the same mark as their more industrious group members, which seems to be the most common cause of students’ concern, dissatisfaction and complaints (Aggarwal & O’Brien, 2008; Kennedy, 2006).
Both these dysfunctionalities are likely to occur in face-to-face and online settings alike, but perhaps they are more common in the latter due to the lack of bodily presence and the challenges of community building? Besides, Hron and Friedrich (2003) claim that, due to their characteristics, online discussions (and I dare speculate that by extension probably any form of online communication) may pose a number of problems, among them difficulty in maintaining topic coherence and in understanding the context of a message, which could give rise to a number of difficulties in group activities.
For instance, in my own experience of setting up group discussions within a virtual learning environment (Moodle), students often contribute a single post and rest on their laurels, satisfied that they have completed the task. There is usually little or no interactive value in the post, e.g. a question or a challenging comment. When I run the activity report I often find there is one post per student – is that a discussion or rather a series of mini-presentations, I ask myself.
Such eventualities should be borne in mind when designing the tasks, and perhaps more guidance in ‘global learning methods for organising group work, behaviour rules for structuring dialogues, so-called co-operation scripts’ (Hron & Friedrich, 2003: 73) should be offered. Introducing peer evaluation during the process of group work, or final peer assessment, could also reduce the incidence of loafing (Aggarwal & O’Brien, 2008; Pond et al., 2007).
ON PEER ASSESSMENT (ONLINE)
This idea of peer assessment, as you remarked in your comment, Clara, is, however, both compelling and problematic.
To start with, there are obvious benefits to introducing peer assessment, such as increased motivation, engagement and accountability (Falchikov, 2005). To illustrate, in a piece of research carried out by Bouchoucha & Woznak (2010), engagement and interaction in an online group discussion increased from 1.4 to 3.6 posts per person in a single discussion once peer assessment was introduced. Other potential advantages include the facilitation of a deeper understanding of the subject matter and of the student’s own achievement (Bloxham & Boyd, 2007).
Besides, where the group project involves other communication media beyond the tutor’s control – say emails, phone or Skype conversations, chat or texts – the tutor is not really in a position to gain a good insight into the group’s dynamics and fabric, and thus cannot make a fair judgement. The only people capable of assessing their own and others’ relative contributions are the students themselves (Race, 2001), but the question arises whether they are able to do so fairly and reliably.
ON PROBLEMS RELATED TO PEER ASSESSMENT
The reading I have done on the subject suggests some discrepancy in views in that respect: from quite favourable findings that students grade accurately and consistently (Marcoulides & Simkin, 1995), through reports that they mark with a slight bias towards over-marking (Boud & Holmes, 1995) or under-marking (Hamer et al., 2009), to the finding that the correlation between students’ and tutors’ marks tends to be restricted to holistic judgements based on well-understood criteria (Falchikov & Goldfinch, 2000). I suspect the findings would be even more conflicting if a more thorough comparison were carried out across departments and faculties, with their different grading scales, approaches and typical assignments. Add interdisciplinary degrees, or postgraduate degrees drawing students from different backgrounds, humanities and sciences alike, put the whole thing online on top of that, and the issue gets muddier still (as your comment illustrates).
ON POSSIBLE SOLUTIONS
Ways of addressing the problem include involving students in the process of creating the peer assessment instrument. For example, they could select from a pre-prepared bank of questions or criteria using various scoring scales (Nicol & Milligan, 2006); alternatively, they could formulate the criteria themselves. In theory, they should then understand the criteria inside out and allocate a fair mark. However, even this might be problematic, as students might have varying notions of what particular terms stand for, or of how achieving a given criterion translates into a particular grade, which might result in over- or under-marking. The clarity of the rubrics could be tested by means of a calibration process prior to launching the group project. To further reduce this stringency or leniency effect, each student could be assessed by all the other group members (multiple peer assessment), and the assessors could be encouraged to write more detailed comments justifying their scoring.
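To make the multiple peer assessment idea a little more concrete, here is a rough sketch (in Python, with entirely made-up names and scores of my own) of how each assessor’s marks might be rescaled by that assessor’s own average before being combined, so that a habitually harsh or lenient marker does not skew a peer’s overall score:

```python
# Hypothetical peer scores: scores[assessor][assessee] on a 1-5 scale.
scores = {
    "Ana": {"Ben": 4, "Cem": 5, "Dee": 3},
    "Ben": {"Ana": 5, "Cem": 5, "Dee": 5},  # a lenient marker
    "Cem": {"Ana": 3, "Ben": 2, "Dee": 2},  # a harsh marker
    "Dee": {"Ana": 4, "Ben": 4, "Cem": 4},
}

def normalised_peer_scores(scores):
    """Divide each mark by the assessor's own average mark, then average
    the rescaled marks each student receives from all their peers.
    A result near 1.0 means 'a typical contribution in the eyes of peers'."""
    received = {}
    for assessor, given in scores.items():
        assessor_mean = sum(given.values()) / len(given)
        for assessee, mark in given.items():
            received.setdefault(assessee, []).append(mark / assessor_mean)
    return {student: sum(v) / len(v) for student, v in received.items()}
```

After normalisation, Ben’s uniformly generous 5s and Cem’s uniformly stingy 2s both flatten towards 1.0, so only the relative differences each assessor draws between peers survive into the final figure.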
Hmm, it all sounds ideal, but I guess both the preparation (creating criteria, running assessment trials) and the implementation (students filling in the assessment forms and tutors reading them) might prove time-consuming, adding to the workload of everybody in question. It is also unclear to what extent the peers’ marks should influence the final grade and how the calculation should be performed.
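Just to make that open calculation question tangible, here is one purely illustrative possibility – the 70/30 split, the cap and all the numbers are my own arbitrary assumptions, not a recommendation. The tutor awards a group mark; each student’s peer factor (their average peer score divided by the group average, so 1.0 means a typical contribution) nudges the mark up or down; a cap stops a high factor from inflating marks too far:

```python
def individual_grade(group_mark, peer_factor, peer_weight=0.3, cap=1.1):
    """Blend the tutor's group mark with a peer-derived adjustment.

    peer_factor: the student's average peer score divided by the group's
    average peer score (1.0 = typical contribution). Split and cap are
    arbitrary, illustrative choices.
    """
    factor = min(peer_factor, cap)      # stop generous scores over-inflating
    adjusted = group_mark * factor      # peer-adjusted version of the mark
    blended = (1 - peer_weight) * group_mark + peer_weight * adjusted
    return round(min(blended, 100), 1)  # keep within a 0-100 marking scale
```

Of course, whether peers’ marks should merely moderate the tutor’s mark, as here, or carry independent weight of their own, is precisely the question that remains open.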
I’d be quite interested in doing more about this, as the issues of peer/self-review and assessment have been on my mind for some time now. Basically, I’d like to encourage more reflection on the part of my students – deeper thinking about their performance, goals, abilities and skills. The issue of formative peer assessment springs to mind too (as Aggarwal and O’Brien (2008) suggest, it is multiple peer evaluations that best prevent social loafing). Something perhaps to ponder on the assessment module in the future?
Keywords: IDEL11, lone wolves, online assessment, online group work, peer assessment, peer review, social loafing
Comments
Well, you know I’m happy to join you in chats about online assessment :)
On dissatisfaction with lone wolves and free riders – have you come across Davies, W.M. (2009), ‘Groupwork as a form of assessment: common problems and recommended solutions’, Higher Education, 58, pp. 563-584? Davies says ‘Watkins claims students are less likely to think of themselves as suckers if they genuinely feel that they are covering for a member of the group who is unlikely to succeed by themselves. Thus, one way of minimising the sucker effect is to allow members of groups to ‘‘get to know each other better’’. (p. 567) Davies also neatly teases out the different sorts of group work that can happen. Well worth a read.
On groupwork online – hm, I suspect like f2f it depends a lot on design. For instance, in f2f tutorials you wouldn’t necessarily expect many students to speak more than in answer to a direct question from a tutor (which I guess I see as equivalent to the set task).
On peer assessment - I wonder if an alternative is this idea of a self-regulated learning alongside peer evaluation? (see e.g. David Nicol’s work, esp. the 2006 piece with Macfarlane-Dick at http://www.reap.ac.uk/Resources.aspx under ‘reap publications’.) Both depend, of course, on students understanding and being able to apply assessment criteria to their work as you noted - Did I give you the link for Dai Hounsell’s bit on connoisseurship? http://www.tla.ed.ac.uk/interchange/spring2008/hounsell2.htm
> I’d be quite interested in doing more about this as the issues of peer/self-review and assessment have been on my mind for some time now.
Maybe we could look at it as a final assignment topic? Maybe finding a way to focus on one of the digital environments we’re exploring as a space for peer assessment, say?