


Test what you want taught

One of the elements of the INISSS Project was to trial innovative assessment and reporting procedures. Rosemary Callingham and Professor Patrick Griffin were engaged to assist with this task and produced a series of papers describing the nature and outcomes of their work. What follows has been extracted from these papers.

Both the qualitative and quantitative findings from the INISSS Project suggest that the use of an alternative but rigorous form of assessment, which matches productive teaching strategies, can play an important role in helping to improve learning outcomes for Indigenous students.

The assessment tasks developed for INISSS closely match particular approaches to teaching (sometimes described as 'working mathematically'), and there is strong evidence that students' learning behaviour changed as a result.

But the introduction of alternative assessment to a group of teachers committed to ideas of inclusive practice and equity was not easy, nor did acceptance of the ideas take place quickly. The process showed that teachers needed time and support to come to terms with a form of assessment that both demanded their expertise and challenged their perceptions of assessment. Rosemary Callingham presented a conference paper about these matters.

How do you go beyond right and wrong answers to assess mathematical ability? A series of successful ideas evolved.


The nature of the assessment tasks and procedures

The theory

In very simple terms, the schema underlying the assessment tasks assumes that, as their mathematical cognitive capabilities advance, students move from pattern recognition → pattern use → rule recognition → rule application → rule extension → developing and applying relationships and generalisations.

These elements can be mapped within a topic and thus incorporated in the question/task framing process.
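To make this concrete, the progression can be treated as an ordered scale against which questions are framed. The following Python sketch is illustrative only: the stage names follow the progression above, but the numeric levels and question tags are hypothetical, not drawn from the INISSS papers.

```python
from enum import IntEnum

class Stage(IntEnum):
    """Developmental progression described above; numeric levels are illustrative."""
    PATTERN_RECOGNITION = 1
    PATTERN_USE = 2
    RULE_RECOGNITION = 3
    RULE_APPLICATION = 4
    RULE_EXTENSION = 5
    RELATIONSHIPS_AND_GENERALISATIONS = 6

# Hypothetical tagging of a task's questions, so that the set of questions
# spans the whole progression within a single topic.
question_targets = {
    "Q1": Stage.PATTERN_RECOGNITION,
    "Q4": Stage.RULE_APPLICATION,
    "Q8": Stage.RELATIONSHIPS_AND_GENERALISATIONS,
}
```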

In addition, following a review of trial outcomes, a match was evident between levels of ability and the 'productive' form of response. Students at the lower end tended to provide verbal explanations in 'natural' language; students in the mid range, written explanations in 'natural' language; and students at higher levels of ability, written explanations in symbolic (or 'mathematical') language. These skills could be recognised and reflected in the scoring process.
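That match can be reflected directly in a rubric. A minimal sketch, assuming the three bands described above; the labels and structure are illustrative rather than taken from the INISSS scoring manuals:

```python
# Illustrative only: the form of a student's explanation, not just its
# correctness, signals the ability band described in the trial review.
EXPLANATION_FORM_BAND = {
    "verbal_natural_language": "lower",    # spoken explanation in everyday language
    "written_natural_language": "middle",  # written explanation in everyday language
    "written_symbolic": "higher",          # written explanation using mathematical symbols
}
```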

As noted below, there were particular features of administering the tasks that were unconventional, reflecting the desire to limit the impact of 'test-taking' procedures and find out as much as possible about students' actual mathematical understanding and capabilities.


The tasks

Each task was presented in a similar format to the Task Centre materials, with instructions and materials provided in a resealable plastic bag. Students were provided with a record sheet for each task.

Each task had between eight and ten individual questions, each with a scoring rubric based on responses anticipated from experience and modified after trialling. This provided a standard format for collecting the assessment information, but did make the tasks more structured than those of the Task Centres.

The questions were ordered from least to most difficult, except where the order interfered with the natural teaching and learning structure of the task. Overall, the tasks were more open than traditional assessment materials and had the 'flavour' of the Task Centre teaching tasks. Each had an estimated administration time of about 40 minutes.

Initially, four tasks were developed.

  • Bean Counter addressed recognition and use of number patterns. It required students to use number patterns to complete addition squares. The name came from the beans that were provided as part of the task package. Students used the beans to 'juggle' the numbers in the empty squares.
  • Come in Spinner, as its name implies, addressed aspects of the students' understanding of chance and variation. Students made predictions about likely outcomes and then made and tested spinners to see if they met the predictions.
  • Newspaper by the Metre addressed area conservation and estimation. Students made a square metre out of newspaper and then used this to design 'garden beds' and estimate areas.
  • Street Party was an original task addressing pattern and algebra concepts. Both direct and inverse relationships were targeted; one possible form of each is sketched after this list. Students developed patterns by building small tables into a 'long table' for a street party, and used the patterns to find the numbers of small tables needed to seat a given number of people.
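To illustrate the kind of relationship Street Party targets, suppose square tables are joined end to end with one person seated per exposed side (a common configuration for tasks of this type; the actual INISSS materials may differ). The direct and inverse rules can then be sketched as:

```python
import math

def seats(tables: int) -> int:
    """Direct rule: n tables in a row seat two people per table plus one at each end."""
    return 2 * tables + 2

def tables_needed(people: int) -> int:
    """Inverse rule: the smallest number of tables seating at least `people`."""
    return max(1, math.ceil((people - 2) / 2))

print(seats(1))           # 4: a single table seats four
print(seats(5))           # 12: the pattern runs 4, 6, 8, 10, 12, ...
print(tables_needed(30))  # 14: fourteen tables seat exactly thirty
```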

The newspaper task was subsequently dropped as an assessment task because of practical difficulties, although it continues to be used as a teaching task. All tasks have been modified as a result of the INISSS experience, and new tasks have been developed.


Administration

Classroom teachers administered the tasks within their normal lessons. They were asked to present the tasks as a normal teaching and learning activity: scaffolding the process through open-ended questioning, reading the questions to poor readers, and allowing the use of any tools normally available in the classroom, including calculators, except where restrictions were specified.

Marking was undertaken in a variety of ways: in some schools, teams of teachers met to mark the work; in others, teachers marked their own class's work; and in one school, one teacher marked all the work. To facilitate marking, teachers were given a manual containing information about the targeted conceptual development, a detailed scoring rubric and examples of students' work exemplifying the rubric.

A feedback session following the first administration of the tasks allowed moderation of the marking and provided an opportunity for teachers to discuss the practical implications of this form of assessment. As a result, some modifications were made to the scoring rubrics to make them easier for teachers to use and more explicit about the nature of the desirable response.


Scoring

Every item on each of the tasks had a scoring rubric that allowed students' responses to be coded. The rubric provided an analytic scale for each item, ranging from 0/1 for a right/wrong response to a 0-5 scale for a developmental sequence of responses on later tasks.
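A minimal sketch of how such item-level rubrics might be represented for coding responses; the item names and score-point descriptions are hypothetical, and only the overall shape (scales running from 0/1 up to 0-5) follows the description above:

```python
# Hypothetical rubric structures; the actual INISSS rubrics were marking manuals
# with worked examples of student responses at each score point.
RUBRICS = {
    "simple_item": {          # right/wrong item scored 0/1
        0: "incorrect or no response",
        1: "correct response",
    },
    "developmental_item": {   # later task item scored on a 0-5 scale
        0: "no response",
        1: "recognises the pattern but cannot use it",
        2: "extends the pattern by counting on",
        3: "applies a rule in a specific case",
        4: "extends the rule to new cases",
        5: "states and applies a generalisation",
    },
}

def code_response(item: str, score: int) -> int:
    """Check that a marker's score is a valid point on the item's scale."""
    if score not in RUBRICS[item]:
        raise ValueError(f"{score} is not a valid score point for {item}")
    return score
```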


