What's your number?
In December each year the Higher School Certificate (HSC) results are released, and each year the media debates ongoing issues about this credential. These issues include:
- Data issued by the NSW Education Standards Authority
- The focus on Band 6
- The value of the HSC
- Criticism of the HSC
- Australian Tertiary Admission Rank (ATAR)
- The proliferation of early offers of university placement.
1. Data issued by the NSW Education Standards Authority
In the late 1990s, there was concern about how student achievement was being perceived and about the data that was being published. The McGaw committee, charged with reviewing the HSC, wanted to reduce the media publication of league tables that ranked schools, change the perception of what constituted success, and separate the tertiary entrance rank (TER) from the publication of HSC results.
Changes resulted in:
- I. The community perception that 50% was a pass and less than 50% a failure led to a rescaling of marks into six bands, with Band 6 the highest and 50% the cut-off between Bands 1 and 2. This meant that most students who had worked hard for two years were no longer labelled failures for scoring below 50%.
- II. Increased efficiency allowing the HSC results to be sent to students prior to Christmas.
- III. Successful negotiation with the University Admissions Authority to have the then Tertiary Entrance Rank (to become ATAR) delivered to students some days after they received their HSC results.
- IV. A huge debate about how much data the assessment authority would release. Consistent with its desire to limit data, only the names and schools of students achieving Band 6 in each subject were released.
2. The focus on Band 6
The release of Band 6 data immediately resulted in the media creating league tables based on these results. The media expanded the use of this data to make year-by-year comparisons of schools, school systems, types of schools, and achievement by gender, geography and postcode. Improving schools were praised and falling schools criticised based on the performance of their top students. Schools that improve in other ways, such as lifting a whole cohort or moving students from one band to a higher band in one or more subjects, are ignored as the focus falls on just the top group.
Government selective schools continue to be highly represented in this process, but comprehensive schools receive little recognition because they retain fewer bright students as more selective schools are established. While the government basks in the achievements of the selective schools, the community becomes more wary of comprehensive schools as a destination for their children.
Band 6 results are helpful in providing information about the elite but do not provide any value-added information about schools or any comprehensive picture of their performance, including underperformance in selective schools.
3. The value of the HSC
Arguments about the value of the HSC have continued over the years. It remains an internationally recognised credential that has rigorous assessment and testing procedures. Some of these processes remain misunderstood.
I. It is not a standardised test.
The HSC is a standards-referenced test not a standardised test. What’s the difference?
A standardised test conforms to a normal distribution curve established from a large population. Its mean, mode and median are the same, and marks are usually applied so that half the students fall below 50% and half above. From this comes the underlying community perception of pass and failure. But it makes little sense to fail half a cohort based on a mathematical curve, or on tests designed to achieve this distribution.
Standards-referenced tests develop achievement standards that are based on the quality of students' work. In the case of the HSC there are six standards, or bands, underpinned by samples of student work that illustrate each standard. When the HSC was first introduced, groups of teachers sat around reading scripts and deciding which ones illustrated the description given to each band. They then reviewed the students' scripts to determine those that sat at the margin between each band. These scripts became the determinants of cut-off points between bands in the rank order of students in that subject.
Each year these marginal scripts play a role in determining the cut-off points in the rank order for each band from that year's exam. There can be a different number and percentage of students within each band each year, according to the quality of students' work represented by these cut-off points. Because the HSC candidature is so large for most subjects, some normalisation prevails, but smaller subject cohorts are even more reliant on the standards described by the band descriptors and student work samples, especially at the band margins.
There will always be some students who do not reach the minimum standard of Band 2, resulting in a Band 1 that is given marks below 50%. For the remainder of a cohort, marks range from 50 to 100 based on the bands and standards represented by the work samples used to determine cut-off points in the rank order.
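To make the mechanics concrete, the short sketch below (in Python) shows one way raw exam marks could be aligned to the fixed reported band boundaries of 50, 60, 70, 80 and 90 once judges have set the cut-off marks. The cut-off values are entirely invented for illustration, and the actual NESA process is more involved; this is only a minimal model of the idea that marginal scripts set the cut-offs and the cut-offs set the reported marks.

```python
# Minimal sketch only: not NESA's actual algorithm.
# Hypothetical raw cut-off marks for the bottom of Bands 2..6 (in reality,
# judges set these each year from borderline scripts).
raw_cutoffs = {2: 35, 3: 48, 4: 60, 5: 72, 6: 84}
# Reported band boundaries are fixed: Band 2 starts at 50, ..., Band 6 at 90.
reported_cutoffs = {2: 50, 3: 60, 4: 70, 5: 80, 6: 90}
RAW_MAX, REPORTED_MAX = 100, 100

def report_mark(raw: float) -> tuple[int, float]:
    """Return (band, reported mark) for a raw exam mark."""
    band = 1
    for b in sorted(raw_cutoffs):
        if raw >= raw_cutoffs[b]:
            band = b
    if band == 1:
        # Band 1: marks below the Band 2 cut-off map below 50.
        return band, round(raw / raw_cutoffs[2] * 50, 1)
    # Otherwise interpolate within the band between its two boundaries.
    lo_raw, hi_raw = raw_cutoffs[band], raw_cutoffs.get(band + 1, RAW_MAX)
    lo_rep, hi_rep = reported_cutoffs[band], reported_cutoffs.get(band + 1, REPORTED_MAX)
    reported = lo_rep + (raw - lo_raw) / (hi_raw - lo_raw) * (hi_rep - lo_rep)
    return band, round(reported, 1)

for raw in (30, 48, 66, 84, 95):
    print(raw, report_mark(raw))
```

Because the cut-offs move with the quality of each year's scripts, the percentage of students in each band can change from year to year even though the reported boundaries stay at 50, 60, 70, 80 and 90.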
As illustrated by some newspaper articles and commentators, these matters are not well understood by many students, some teachers and much of the general public.
II. A modified school assessment
Half the HSC result is based on a school assessment program carried out over Year 12 and involving a number of assessment tasks. Teachers calculate a rank order from the assessment tasks for each subject. This order cannot be changed, and with large numbers (20 or more) most schools forward the assessment marks to the assessment authority without alteration. However, the actual marks can be changed, though not the ranking, to better show the differences between students: teachers can spread a narrow range of marks or compact a broad range where the nature of the assessment tasks has produced marks that do not reflect the real differences between students.
Because these school assessments will be moderated according to how the cohort performs in the HSC examination, these gaps become important. If the gaps are too small, some of the best students can be disadvantaged by poorer students winning more of the total cohort marks; if the gaps are too large, some poorer students can be disadvantaged, winning less of the cohort marks because their assessment marks are depressed.
This changing of marks, not rankings, remains both misunderstood in schools and underutilised.
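A simplified sketch may help show why these gaps matter. The Python below linearly rescales a hypothetical school group's assessment marks so that their mean and spread match the same students' exam marks, while preserving the school's rank order. It is not NESA's actual moderation algorithm, only an illustration of how the relative gaps between students in the school assessment determine how the exam-aligned marks are shared out.

```python
# Simplified illustration only: not the exact NESA moderation process.
from statistics import mean, pstdev

def moderate(assessment: list[float], exam: list[float]) -> list[float]:
    """Rescale school assessment marks to the mean and spread of the exam marks."""
    a_mean, a_sd = mean(assessment), pstdev(assessment)
    e_mean, e_sd = mean(exam), pstdev(exam)
    scale = e_sd / a_sd if a_sd else 0.0
    return [round(e_mean + (a - a_mean) * scale, 1) for a in assessment]

# Hypothetical cohort of five students, listed in school rank order.
school_assessment = [92, 90, 88, 87, 85]   # compacted gaps between students
exam_marks        = [95, 84, 80, 72, 64]   # same students, wider exam spread

print(moderate(school_assessment, exam_marks))
# Rank order is preserved; each student's moderated mark depends on how far
# they sat from the assessment mean, which is why spreading or compacting
# assessment gaps changes how the cohort's marks are distributed.
```

In this toy example the moderated marks take on the exam group's mean and spread, but who receives which of those marks is determined entirely by the school assessment ranking and gaps, not by each student's own exam paper.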
III. Standards-referencing upholds standards from year to year.
The advantage of the HSC can be seen from the above discussion. Each year it provides a report against standards so that politicians, educational bureaucrats and the general public have a measure of whether students are improving in each subject. Schools can learn whether they have done better or worse against these standards by comparing how many students fell into each band each year. While the media focus might be on the released Band 6 data, schools have a much fuller set of data to work with to improve teaching and learning. Only the most ignorant schools fail to benefit from this feedback about their students' performance.
4. Criticism of the HSC
The main criticism continues to be that some subjects favour students with retentive minds. Ignoring many of the qualities of a sound education, including creativity and reasoning, subjects can rely heavily on students memorising facts and figures rather than developing a depth of understanding. It is a continuing task for the curriculum authority to monitor subject syllabuses and examinations to ensure that rote learning is minimised, and that understanding and application are maximised and rewarded. These are matters for both syllabus writers and examiners to address if the general public, and teachers in particular, are to be convinced that the HSC is the excellent credential it is purported to be.
5. Australian Tertiary Admission Rank (ATAR)
If some don't understand the HSC, their number pales into insignificance beside the number who don't understand the Australian Tertiary Admission Rank. Both teachers and the public are wary of statistical processes that manipulate marks to come up with a single number used to rank all students in Australia seeking entry to university.
Before looking at the process, let's try to understand why it is undertaken. Take the example of mathematics, where there are multiple courses of varying difficulty. Would it be fair to take a mark of 80% in the lowest course and give it the same weighting as 80% in the highest course? No, it would not. And this applies to every course and subject. Simply adding the HSC subject marks together for each student, English plus the best 8 units, would not be fair. Then what can be done?
The HSC marks must be moderated to take account of the quality of the candidature in each subject. Through a process of iterative scaling, involving the marks in all subjects undertaken by the candidates in a particular subject, the marks in every course are adjusted to fit the quality of the subject candidature. Put simply, that subject quality is determined by viewing the results of each student in all their other courses. Courses are scaled so that "the mean and distribution of the marks obtained in the course are consistent with the mean and distribution of marks that the students taking that course obtain in all of their HSC subjects" (Universities Admissions Centre). When applied across all courses, the new marks are added up for each student, two units of English plus the best eight other units, and this single mark, arranged from top to bottom (ranked), gives the ATAR.
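For readers who want a feel for what iterative scaling involves, here is a heavily simplified sketch in Python with invented students, courses and marks. It is not UAC's actual algorithm: it simply rescales each course's marks to the mean and spread of that candidature's average marks across all their courses, repeats the process, and then ranks a simple aggregate. (The real aggregate uses two units of English plus the best eight other units, and the ATAR itself is a rank rather than a mark.)

```python
# Heavily simplified sketch only: not UAC's actual scaling algorithm.
from statistics import mean, pstdev

# Hypothetical students and raw marks: student -> {course: mark}.
raw = {
    "Ana":  {"Maths Ext": 80, "English Adv": 75, "Physics": 78},
    "Ben":  {"Maths Std": 80, "English Std": 70, "Biology": 72},
    "Cate": {"Maths Ext": 88, "English Adv": 84, "Physics": 86},
    "Dev":  {"Maths Std": 65, "English Std": 60, "Biology": 58},
}

def scale_once(marks: dict) -> dict:
    """Rescale every course to the mean and spread of its candidature's overall marks."""
    courses = {c for m in marks.values() for c in m}
    scaled = {s: dict(m) for s, m in marks.items()}
    for course in courses:
        takers = [s for s in marks if course in marks[s]]
        course_marks = [marks[s][course] for s in takers]
        overall = [mean(marks[s].values()) for s in takers]   # candidature quality
        c_mean, c_sd = mean(course_marks), pstdev(course_marks)
        o_mean, o_sd = mean(overall), pstdev(overall)
        ratio = o_sd / c_sd if c_sd else 0.0
        for s in takers:
            scaled[s][course] = o_mean + (marks[s][course] - c_mean) * ratio
    return scaled

marks = raw
for _ in range(10):          # iterate until the adjustments settle
    marks = scale_once(marks)

# Aggregate (here just an average of scaled marks) and rank from top to bottom.
for agg, student in sorted(((mean(m.values()), s) for s, m in marks.items()), reverse=True):
    print(f"{student}: scaled aggregate {agg:.1f}")
```

Even in this toy version, a mark of 80 in a course taken by a strong candidature ends up contributing more to the aggregate than a mark of 80 in a course taken by a weaker one, which is the fairness problem the scaling is designed to address.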
This ATAR then affects the university courses students can enter depending on the demand by students with higher ATARs.
One of the problems caused by this process has been that students perceive the more difficult courses as higher ATAR-mark courses and therefore the ones they should study to achieve a high ATAR. This is partly a self-fulfilling prophecy, because the brighter students take these courses and lift those subjects' marks, and hence their contribution to the ATAR. But for poorer students who join these courses, their marks remain low, and disappointment rules their HSC. Most students do best at subjects they like and are good at, and these have the potential to contribute most to their ATAR. Chasing marks by doing courses that may not be liked, or that are too difficult, is a flawed approach to achieving the ATAR most wanted. All this is further complicated by the online versions of "How to calculate your ATAR".
6. The proliferation of early offers of university placement.
In Time to end the arms race of early university offers (SMH, 13/12/2022), Dallas McInerney, CEO of Catholic Schools NSW, outlines the evolution in equity of the HSC:
Some 1980s policy genius in the form of income-contingent loans for tuition costs (HECS) largely solved the issue of access by lowering barriers to entry. The related challenges of equity and excellence have been met with a history of university admission based on public examinations, common across all schools (the HSC) and recently coupled with school-based assessments, which are moderated to support fairness across the cohort.
McInerney and the SMH article Surge in early uni offers comes under fire from education experts both go on to question the growing practice of universities making early offers of acceptance into their courses. Nearly 25,000 students have applied for early offers through the state's admissions centre (UAC), and others have applied directly to individual universities, meaning more than half of the school-leaver cohort could have an early offer of some form.
There might be some benefits, but they appear to favour the universities, which gain planning and operational certainty and income projections. Benefits for students are less clear, particularly as unconditional or low-stakes offers can come as early as April of Year 12. Some schools report that students with early offers "check out" of their studies, lose motivation, or do not fully invest in final exams.
The issue at the heart of these concerns is equity. While some students get the benefit of early entry, others are denied it. Christians should be concerned about such matters: justice is not an optional extra for a Christian world view.
McInerney applauds the NSW government's decision to commission a review of early offers, and makes some suggestions:
- early offers should be required to be conditional; a minimum academic requirement is perfectly reasonable.
- there should be a limit to just how early these early offers can be made (say, September).
- early offers should be managed centrally through UAC rather than directly with individual universities, thereby allowing regulators to monitor the effects of the various schemes.
Obviously, ATAR is an imperfect measure on its own, but there are already adjustment factors (formerly known as bonus points) as well as a host of scholarships (rural, ATSI, dux, financial hardship, etc.) designed to address its limitations.
The review by the NSW government will be eagerly awaited, but the cat is out of the bag and is unlikely to be put back. How far the review can put boundaries around early entry is difficult to say. Some modification is possible, but wholesale change? I doubt it. Too many vested interests and too much loss of privilege are at stake. But Christians should support approaches that provide for equity and hence justice.
John Gore