Higher grades, better students? Or higher grades, lower standards? When more students achieve high exam grades, some claim the credit for supposedly better education systems. Others suggest that requirements must have been lowered. Behind these suspicions usually lies a belief that there is a natural ceiling to overall performance in education. That view is mistaken.
We can lift our sights in education, just as in other human activities. International comparisons make this clear. By showing how much better some countries do than others, they provide evidence that improvement is possible. It is a matter of raising expectations – and expectations matter for progress, both for individuals and countries. Ask a US parent or teacher why a student is doing badly in mathematics, and you are likely to hear that it is to do with intelligence. Ask the same question in Japan or Korea, and parents and teachers will generally blame the student for lack of effort. In some countries, parents blame the teachers.
International comparisons can help to raise expectations. The OECD’s Programme for International Student Assessment (PISA) of achievements by 15-year-olds in reading, mathematics and science has done much to open up this issue. PISA 2000 showed that even some of the best-performing countries in the world have gaps between high and low performers, and between students from socially advantaged and socially disadvantaged backgrounds. In the UK’s case, for instance, these gaps were much greater than in many other countries. Yet “high quality” and “low equity” do not have to go hand in hand. Countries such as Finland, Canada, Japan and Korea showed that high-quality, high-equity results are possible. Ireland’s results were high-quality in reading, though only average in mathematics, with moderate equity, just better than the OECD average.
To date, countries’ reactions to PISA have varied considerably. Germany’s poor performance has provoked an intense debate, particularly about its streaming of students at age 11 into different types of schools. To understand its PISA results better, Germany commissioned a multilateral study among countries with which it wanted more detailed comparisons. Denmark initiated a review of its education policies in relation to those of Finland, a significantly higher performer.
Others have enhanced monitoring of their systems. Mexico has established a new evaluation institute, independent of its Department of Public Education. Canada now uses PISA for its monitoring of language, mathematics and science and applies it domestically to cover other subject areas. Some countries assess all students, rather than just a sample as provided for by PISA. That enables them to monitor both the system as a whole and individual schools, where many key decisions affecting students’ learning are made.
In parallel, countries are defining their expectations more clearly. Since PISA 2000, Germany has developed a national curriculum framework with benchmarks for student performance. Spain, another relatively poor performer on average but one with relatively equitable outcomes, is seeking to lift its higher performers by pursuing high quality, accepting that this will probably come at the expense of equity in the first phase.
Finland, the highest achiever in PISA 2000, defines targets centrally, provides support and monitors schools, but leaves to schools the choice of how the targets are to be met. England, whose initial successful efforts to improve performance in English language and mathematics relied on centralised strategies, is now seeking to give teachers and schools more freedom in determining the means to achieve improvements.
The US is giving special emphasis to low-performing groups. It has a larger percentage of high performers than many countries which outperform it, on average, in PISA. But it has a large percentage of low performers, too. Under new federal legislation, schools and states are required not only to produce overall improvements, but also improvements for currently disadvantaged ethnic minorities. So is the sky the limit, or is there a ceiling for quality? Comparisons between countries show that improvements can be made.
Resources matter, but they are not sufficient. Higher levels of expenditure per student are generally associated with higher levels of student achievement, but there is great variation in the efficiency of systems. Finland, Ireland and the UK, for example, spend less than France, Denmark, Switzerland or the US, but outperform all of them in PISA. In other words, classroom organisation, innovation and teaching methods are all fundamental. Exactly which systems work best to achieve higher standards for all is a key policy question, which is why we must improve our knowledge base about what works.
Education in its current form is concerned with the transmission of knowledge. But it is not yet a knowledge industry whose own practices are transformed by systematic examination of what works effectively. Other areas of professional practice are influenced by research; education must be too. We are not without good research evidence or innovative teaching schools. What is needed is a more systematic approach on the ground, one that involves teachers in the research that will shape their practice.
This is clearly the case in Finland, where teachers have played a front-stage role in that country’s success. Teaching is a high-status profession there. Entry to teacher education is highly competitive, all teachers graduate with master’s degrees, and they are given considerable freedom to innovate in their professional practice. The Finnish system has abolished streaming and grade repetition. Students in difficulty are not passed on to others.
All 30 OECD countries and more than 20 others are already using PISA to monitor performance. Results from 2003 will be available in December this year, and work for 2006 is already under way.
Building a complete picture of what works to obtain improved results is a sure way to raise standards everywhere. It will take time, and it is a complex exercise. But without such evidence we are vulnerable to impressions and prejudice, which are a poor basis for good policy-making.
OECD (2000), Measuring Student Knowledge and Skills: The PISA 2000 Assessment of Reading, Mathematical and Scientific Literacy, Paris.
See also www.oecd.org/edu and www.oecdobserver.org/education.
©OECD Observer No 242, March 2004