Optimizing Assessment for All: Piloting the assessment of collaboration skills in Cambodia, Mongolia, and Nepal
Optimizing Assessment for All is developing assessments of 21st century skills for classroom use. In a recent pilot of assessments of collaboration, critical thinking and problem solving, classroom experiences clearly identified the challenges that confront teachers and students as they move toward open-ended, adaptive, and exploratory approaches to teaching and assessment.
Administering classroom tasks to assess collaboration can be challenging for those of us accustomed to standardised test conditions. After all, collaboration requires communication, and assessment of student achievement has in the past focussed on the individual, with the requirement that students do not communicate with each other during testing. Assessment of collaboration therefore requires a major change in thinking, not only for teachers but also for those who design assessment tasks.
In the Optimizing Assessment for All (OAA) initiative, based both in Asia with the collaboration of NEQMAP and in Africa through the hosting of TALENT, collaboration is one of the highly valued competencies that participating countries are assessing.
Assessment of collaboration requires that the test designer and test administrator focus on fairness, or equal opportunity, just as test design has in the past. However, what "fairness" looks like for collaboration is very different from fairness in individual testing. In the latter, fairness is assured through standardised test conditions: each student receives the same test instructions, the same physical resources to complete the test, the same amount of time, and so on. Assuring fairness for each student in a collaborative group is not so easy. How is it possible to ensure that one student does not dominate, or lead, the others? How is it possible to ensure that each student is given the opportunity to demonstrate and develop their skills within the complexity of a dynamic group process? These are just two of the challenges faced by test designers and classroom teachers.
In the OAA initiative, over 1,000 students recently completed collaborative problem solving tasks as part of the pilot of the assessments developed by three national teams from Cambodia, Mongolia and Nepal. It is one thing to have 1,000 students sit at their desks and complete individual test papers; it is quite another to have these students working in groups of three or four to complete assessment tasks that can take up to a full lesson. We are all familiar with the situation shown in Photograph 1, where test monitors observe students to ensure that they are on-task and not cheating! What is the role of the test monitor in the situation shown in Photograph 2, where communication is an essential component of the activity? How can we ensure not only equal opportunity for students within a group, but equal opportunity for all groups undertaking the task in sometimes limited physical space?
The answers to these questions have been well informed by the OAA pilot. Following development of tasks across mathematics, science and social science by the Cambodian, Mongolian and Nepali OAA teams, the tasks were administered to Grade 5-6 students. This provided a first-hand opportunity for teachers to observe how the targeted skills can be stimulated in real group situations, and to understand the profound implications of these types of assessment for their teaching practice. Seeing the complexity of the skills required led some teachers to query whether application of these skills might be beyond the capability of all but the "outstanding students".
Preceding the Cambodian pilot, teachers from participating schools took part in a consultative workshop on Task Evaluation. They recognized that most of the content used in the tasks relied on and reflected students' daily lives. However, when considering the actual testing experience itself, many teachers were concerned about students' ability to complete the collaboration tasks, given their unfamiliarity and lack of experience with this new form of testing.
In the collaboration assessment sessions, students worked in groups of three or four. The task designs required that students within each group play different roles and take on different responsibilities. This means that tasks cannot be completed unless all students in a group contribute, which is precisely what tests their collaborative skills. One learning from the pilot was that using different group sizes for different tasks made classroom management very difficult as students shifted from one task to another. Another learning concerned how much support students needed as they engaged with the tasks. A feature of 21st century skills and their assessment is the expectation that students will be self-directed and adaptive, rather than following basic and routine procedures. However, when students are not familiar with such experiences in their normal classroom practice, such assessments present unique challenges - and not only for the students! Some Test Administrators, who were themselves teachers, intervened when they observed students' hesitancy, guiding them through procedural steps such as forming a group, or providing explanations of the task requirements.
It also became clear to the Test Administrators that student preferences for group composition interfered with the exercise. For example, some of the "outstanding achievement level" students preferred to work together, as did friendship groups, whereas the expectation for the pilot was that students would be randomly allocated to groups. In addition, because neither Test Administrators nor students were accustomed to collaboration exercises, some students tended to take on "influencing" roles in their groups rather than adopting a shared-responsibility approach. For example, a student who takes on the role of note-taker can easily shape the record of group decisions according to his or her own perspective.
These issues are not unexpected. When new and unfamiliar assessment and learning activities are introduced into classroom practice, we need to understand the gaps between current norms and new approaches. This is precisely why pilots are undertaken. The information collected through the pilots will contribute to development of guidelines to support teachers' nurturing of student skills.
As quantitative and qualitative data are collected from the Cambodia, Mongolia and Nepal pilots, more insights will be gained into the implications of introducing 21st century skills into the classroom, through both assessment and teaching. We look forward to sharing these insights at the next regional convening of NEQMAP, focusing on transversal competencies, in Manila in September 2019.
For more information, please contact Esther Care, The Brookings Institution, email@example.com
Main photo: Cambodia Consultative Workshop / ©Khou Hav