Studies in Empirical Economics
Aman Ullah (Ed.) Semiparametric and Nonparametric Econometrics 1989. ISBN 978-3-7908-0418-8
Walter Krämer (Ed.) Econometrics of Structural Change 1989. ISBN 978-3-7908-0432-4
Wolfgang Franz (Ed.) Hysteresis Effects in Economic Models 1990. ISBN 978-3-7908-0482-9
John Piggott and John Whalley (Eds.) Applied General Equilibrium 1991. ISBN 978-3-7908-0530-7
Baldev Raj and Badi H. Baltagi (Eds.) Panel Data Analysis 1992. ISBN 978-3-7908-0593-2
Josef Christl The Unemployment/Vacancy Curve 1992. ISBN 978-3-7908-0625-0
Jürgen Kaehler and Peter Kugler (Eds.) Econometric Analysis of Financial Markets 1994. ISBN 978-3-7908-0740-0
Klaus F. Zimmermann (Ed.) Output and Employment Fluctuations 1994. ISBN 978-3-7908-0754-7
Jean-Marie Dufour and Baldev Raj (Eds.) New Developments in Time Series Econometrics 1994. ISBN 978-3-7908-0766-0
John D. Hey (Ed.) Experimental Economics 1994. ISBN 978-3-7908-0810-0
Arno Riedl, Georg Winckler and Andreas Wörgötter (Eds.) Macroeconomic Policy Games 1995. ISBN 978-3-7908-0857-5
Thomas Url and Andreas Wörgötter (Eds.) Econometrics of Short and Unreliable Time Series 1995. ISBN 978-3-7908-0879-7
Steven Durlauf, John F. Helliwell and Baldev Raj (Eds.) Long-Run Economic Growth 1996. ISBN 978-3-7908-0959-6
Daniel J. Slottje and Baldev Raj (Eds.) Income Inequality, Poverty and Economic Welfare 1998. ISBN 978-3-7908-1136-0
Robin Boadway and Baldev Raj (Eds.) Advances in Public Economics 2000. ISBN 978-3-7908-1283-1
Bernd Fitzenberger, Roger Koenker and José A. E. Machado (Eds.) Economic Applications of Quantile Regression 2002. ISBN 978-3-7908-1448-4
James D. Hamilton and Baldev Raj (Eds.) Advances in Markov-Switching Models 2002. ISBN 978-3-7908-1515-3
Badi H. Baltagi (Ed.) Panel Data 2004. ISBN 978-3-7908-0142-2
Luc Bauwens, Winfried Pohlmeier and David Veredas (Eds.) High Frequency Financial Econometrics 2008. ISBN 978-3-7908-1991-5
Christian Dustmann · Bernd Fitzenberger · Stephen Machin (Eds.)
The Economics of Education and Training
Physica-Verlag A Springer Company
Editorial Board Heather M. Anderson Australian National University Canberra, Australia
Bernd Fitzenberger University of Freiburg Germany
Badi H. Baltagi Texas A&M University College Station, Texas, USA
Robert M. Kunst Institute for Advanced Studies Vienna, Austria
Editors
Prof. Christian Dustmann University College London (UCL) Department of Economics Gower Street London WC1E 6BT United Kingdom
[email protected]
Prof. Stephen Machin University College London (UCL) Department of Economics Gower Street London WC1E 6BT United Kingdom
[email protected]
Professor Dr. Bernd Fitzenberger Albert-Ludwigs-University Freiburg Department of Economics Platz der Alten Synagoge 1 79085 Freiburg Germany
[email protected]
Eleven of the papers were first published in Empirical Economics, Vol. 32, No. 2-3, 2007, and two papers in Vol. 33, No. 2, 2007.
ISBN 978-3-7908-2021-8
e-ISBN 978-3-7908-2022-5
Library of Congress Control Number: 2007942365 © Physica-Verlag Heidelberg 2008 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Physica-Verlag. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Cover Design: WMXDesign GmbH, Heidelberg, Germany Printed on acid-free paper 9 8 7 6 5 4 3 2 1 springer.com
Contents
Editorial: the economics of education and training . . . . . . . . . . . . . . . . . . . 1
Christian Dustmann, Bernd Fitzenberger, and Stephen Machin

Does reducing student support affect scholastic performance? Evidence from a Dutch reform . . . . . . . . . . . . . . . . . . . 7
Michèle Belot, Erik Canton, and Dinand Webbink

Part-time work, school success and school leaving . . . . . . . . . . . . . . . . . . . 23
Christian Dustmann and Arthur van Soest

Time to learn? The organizational structure of schools and student achievement . . . . . . . . . . . . . . . . . . . 47
Ozkan Eren and Daniel L. Millimet

Who actually goes to university? . . . . . . . . . . . . . . . . . . . 79
Oscar Marcenaro-Gutierrez, Fernando Galindo-Rueda, and Anna Vignoles

Does the early bird catch the worm? Instrumental variable estimates of early educational effects of age of school entry in Germany . . . . . . . . . . . . . . . . . . . 105
Patrick A. Puhani and Andrea M. Weber

Peer effects in Austrian schools . . . . . . . . . . . . . . . . . . . 133
Nicole Schneeweis and Rudolf Winter-Ebmer

Fair ranking of teachers . . . . . . . . . . . . . . . . . . . 157
Hendrik Jürges and Kerstin Schneider

School composition effects in Denmark: quantile regression evidence from PISA 2000 . . . . . . . . . . . . . . . . . . . 179
Beatrice Schindler Rangvid

What accounts for international differences in student performance? A re-examination using PISA data . . . . . . . . . . . . . . . . . . . 209
Thomas Fuchs and Ludger Wößmann

PISA: What makes the difference? Explaining the gap in test scores between Finland and Germany . . . . . . . . . . . . . . . . . . . 241
Andreas Ammermüller

The impact of unionization on the incidence of and sources of payment for training in Canada . . . . . . . . . . . . . . . . . . . 267
David A. Green and Thomas Lemieux

Evaluating multi-treatment programs: theory and evidence from the U.S. Job Training Partnership Act experiment . . . . . . . . . . . . . . . . . . . 293
Miana Plesca and Jeffrey Smith

Employment effects of the provision of specific professional skills and techniques in Germany . . . . . . . . . . . . . . . . . . . 331
Bernd Fitzenberger and Stefan Speckesser
Christian Dustmann · Bernd Fitzenberger · Stephen Machin
Editorial: the economics of education and training
In recent years, there has been a major resurgence of interest in the economics of education and training. This has come from many quarters, including academic research, policy circles and general media debate. This collection of papers on various aspects of the economics of education and training reflects this increased interest. Education and training are key to explaining the current competitive strengths of national economies, and to securing future competitiveness. In an increasingly globalised world economy, national economies compete in the production of tradeable goods and, facilitated by modern technologies, a wide range of services. There are first signs of specialisation at the national level, with, for instance, Germany showing continued strength in manufacturing, while the UK is increasingly withdrawing from this sector and specialising in financial and other tradable services. In the past (and in many Continental European countries possibly until the first PISA study), educational and training institutions were often seen as providers of the skills needed by national economies. This view has changed dramatically: education and training are now seen as a key ingredient of international competitiveness, and the institutions that provide education as central to securing competitive positions. Over the last ten to fifteen years, there have been a number of major changes in the way we deal with our educational institutions. First, there is now a strong tendency to steer educational institutions towards providing cutting-edge knowledge and ability to new generations of workers, with the main objective being to improve the position of the national economy rather than to ensure equality. This is particularly so in Continental European countries, where past reforms were driven by considerations of equality of opportunity, and where the idea of providing education according to individual ability is gaining strength. Second, there is a new openness to reform agendas that look across national boundaries to improve national curricula by incorporating components that have proved successful in other countries. This goes alongside efforts to 'normalise' national curricula by introducing comparable degrees; the Bologna agreement for higher education is an example. Third, education is increasingly seen as a good that can be more efficiently provided in a quasi-market setting. Competition between schools is no novelty in
the Anglo-Saxon world and seems likely to increase in Continental European countries (for example, with increasing pressure to make achievement tables publicly available). At the high end of education, competition for funds and well-paying students is an important ingredient in securing the financing of universities and higher education institutions in the US, Australia, Canada, and the UK. The introduction of comparable degrees in Europe will soon add European countries to that list. And fourth, vocational training systems and government-sponsored training programs are being reformed in similar ways in many European countries that have traditionally seen these as a very important part of skill formation. Germany has recently introduced quasi-market mechanisms (vouchers, performance standards) into the provision of public-sector-sponsored training. The papers in this special issue therefore come at a time of heightened interest in the topic. They cover a wide range of issues at the core of the contemporary debate. These include the financing of education, the transition from school to work, the organisation of education, and school quality and issues related to it, such as the quality of peers and teachers. They also address issues that relate to vocational training activities, and how these are influenced by purpose-made programmes or other institutions. The remainder of this editorial summarises the contributions to the special issue. We start with the papers on specific aspects of the educational systems in single countries. The first eight papers cover issues such as the impact of student support, time allocation, access to higher education, the impact of school entry age, peer effects and the ranking of teachers. The next two papers involve cross-country studies of student achievement based on the PISA data. The remaining three papers are concerned with training the workforce: one paper studies the effects of unions on training, and two papers contribute to the literature on the evaluation of training programs. The study by Michèle Belot, Erik Canton, and Dinand Webbink investigates the impact of student support on the performance and time allocation of students in Dutch higher education. In 1996, the maximum duration of grants was reduced by one year, and thereby limited to the nominal duration of the study program. This reform could have had substantial financial consequences for students. The authors evaluate the effects of the reform using a difference-in-differences approach. The main findings are that after the reform, students early in their study (1) switched less to other programs, (2) obtained higher grades, and (3) did not spend more time studying or working. In addition, for students not older than 20 years when they started their study, larger effects are found for all performance variables (switching, percentage of completed courses, graduation in the first year and grade-point averages). These findings are consistent with recent evidence on heterogeneous treatment effects for higher ability students. Turning to a different aspect of time allocation and education, Christian Dustmann and Arthur van Soest analyse part-time employment of teenagers who are still in full-time education, their academic performance and their school-leaving decisions. The estimation strategy in the paper takes account of the possible interdependencies of these events and distinguishes between two alternatives to full-time education: entering the labour force full-time and going on to further training.
The authors model this decision in a flexible way. The analysis is based on data from the UK National Child Development Study, which has an unusually rich set of variables on school and parental characteristics. The main finding is that working part-time while in full-time education has only small adverse effects on exam performance for females, and no effects for males. The effect of part-time work on the
decision to stay on at school is also negative, but small, and marginally significant for males but not for females. Other important determinants of exam success, as well as of the continuation decision, are parental ambitions for the child's future academic career. Utilizing parametric and nonparametric techniques, Daniel L. Millimet and Ozkan Eren also assess the impact of a heretofore relatively unexplored 'input' in the educational process, time allocation, on the distribution of academic achievement. The results indicate that school year length and the number and average duration of classes affect student achievement. However, the effects are not homogeneous, in terms of both direction and magnitude, across the distribution. Test scores in the upper tail of the distribution benefit from a shorter school year, while a longer school year increases test scores in the lower tail. Furthermore, test scores in the lower quantiles increase when students have at least eight classes lasting 46–50 min on average, while test scores in the upper quantiles increase when students have seven classes lasting 45 min or less or 51 min or more. The study by Fernando Galindo-Rueda, Oscar Marcenaro-Gutierrez, and Anna Vignoles is concerned with access to higher education (HE), a major policy issue in England and Wales. There is concern that children from lower socio-economic backgrounds are far less likely to obtain a degree. The authors analyse the changing association between socio-economic background and the likelihood of going to university, using data from the Youth Cohort Study spanning the period 1994–2000. The study finds evidence of substantial social class inequality in HE participation but concludes that this is largely due to education inequalities that emerge earlier in the education system. Conditional on GCSE and A-level performance, no additional role is found for socio-economic background or parental education in determining pupils' likelihood of going to university. Patrick A. Puhani and Andrea M. Weber estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained from instrumental variable estimation exploiting the exogenous variation in month of birth. The study finds robust and significant positive effects on educational outcomes for pupils who enter school at seven instead of six years of age: test scores at the end of primary school increase by about 0.40 standard deviations, and the probability of attending the highest secondary schooling track (Gymnasium) increases by about 12 percentage points. The study by Rudolf Winter-Ebmer and Nicole Schneeweis deals with educational production in Austria and focuses on the impact of schoolmates on students' academic outcomes. The authors use PISA 2000 and 2003 data to estimate peer effects for 15- and 16-year-old students. School fixed effects are employed to address the potential self-selection of students into schools and peer groups. The estimations show significant positive effects of the peer group on students' reading achievement, and less so for mathematics. The peer effect in reading is larger for students from less favourable social backgrounds.
Furthermore, quantile regressions suggest that peer effects in reading are asymmetric in favour of low-ability students, meaning that students with lower skills benefit more from being exposed to clever peers, whereas those with higher skills do not seem to be affected much. Kerstin Schneider and Hendrik Jürges are concerned with a different aspect of educational production, namely rankings of teacher quality. Economic theory suggests that it is optimal to reward teachers depending on the relative performance
of their students. The authors develop an econometric approach, based on stochastic frontier analysis, to construct a fair ranking that accounts for the socio-economic background of students and schools and for the imprecision inherent in achievement data. Using German PIRLS (IGLU) data, the authors exploit the hierarchical structure of the data to estimate the efficiency of each teacher. A parsimonious set of variables suffices to obtain an estimate of unobserved teacher quality. A Hausman–Taylor type estimator is the preferred estimator because teacher efficiency and some exogenous variables are correlated. In a study for Denmark, Beatrice Schindler Rangvid combines data from the first wave of the PISA study with register data to estimate the effect of the socio-economic mix of schools on students' test scores. The administrative data make it possible to add family background information for all same-aged schoolmates of the PISA students. To address endogeneity in the school composition variable, the results are conditioned on a rich set of family and school variables from the PISA data. Quantile regression results suggest differential school composition effects across the conditional reading score distribution, with students in the lower quantiles achieving the largest test score gains. Mathematics results suggest that high- and low-ability students benefit equally from attending schools with a better student intake, and most results for science are only marginally significant. These results imply that mixing students of different home backgrounds could improve equity of achievement for both reading and mathematics; however, the average skill level would improve only for reading literacy. In mathematics, mixing students would not raise average outcomes, because the detrimental effect on students in the higher quantiles would offset the positive effects on those in the lower quantiles. So far, we have discussed studies for single countries. Ludger Wößmann and Thomas Fuchs use the PISA student-level achievement database to estimate international education production functions. Student characteristics, family backgrounds, home inputs, resources, teachers and institutions are all significantly associated with math, science and reading achievement. Their models account for more than 85% of the between-country performance variation, with roughly 25% accruing to institutional variation. Student performance is higher with external exams and budget formulation, and also with school autonomy in textbook choice, the hiring of teachers and within-school budget allocations. Autonomy is more positively associated with performance in systems that have external exit exams. Students perform better in privately operated schools, but private funding is not decisive. In a second cross-country study, Andreas Ammermüller analyzes the large difference in the level and variance of student performance between Finland and Germany in the 2000 PISA study. To explain the better performance of Finnish students, the study estimates educational production functions for both countries, using a unique micro-level dataset with imputed data and added school type information. The difference in reading proficiency scores is decomposed into different effects using Blinder–Oaxaca and Juhn–Murphy–Pierce decomposition methods.
The analysis shows that German students and schools have on average more favourable characteristics except for the lowest deciles, but experience much lower returns to these characteristics in terms of test scores than Finnish students. The role of school types remains ambiguous. Overall, the observable characteristics explain more of the variation in test scores in Germany than in Finland. Turning to the studies in this special issue on the economics of training, the study by Thomas Lemieux and David A. Green uses the Adult Education and Training
Survey (AETS) to look at the effect of unions on the incidence and sources of payment for training in Canada. Simple tabulations indicate that union workers are more likely to engage in training activities than nonunion workers. The higher incidence of training among union workers is driven by the fact that they are more likely than nonunion workers to take training courses offered by their employers. This suggests that union workers are more likely to participate in training activities that enhance their firm-specific human capital. This union effect disappears, however, once the authors control for a variety of factors such as age, education, and in particular firm size and seniority. Everything else being equal, unions have little effect on the provision of training in Canada. Finally, the study presents some limited evidence that unions help increase the participation of firms in the financing of training activities. Contributing to the literature on evaluating training programs, the study by Jeffrey Smith and Miana Plesca considers the evaluation of programs in the United States that offer multiple treatments to their participants. The theoretical discussion outlines the trade-offs associated with evaluating the program as a whole versus separately evaluating the various individual treatments. The empirical analysis considers the value of disaggregating multi-treatment programs using data from the US National Job Training Partnership Act Study. This study includes both experimental data, which serve as a benchmark, and non-experimental data. The JTPA experiment divides the program into three treatment 'streams' centered on different services. Unlike previous work that analyses the program as a whole, the streams are analyzed separately. Despite the relatively small sample sizes, the findings illustrate the potential for valuable insights into program operation and impact to get lost when treatments are aggregated. In addition, it is shown that many of the lessons drawn from analyzing JTPA as a single treatment carry over to the individual treatment streams. In a study evaluating training programs in Germany, Bernd Fitzenberger and Stefan Speckesser estimate the employment effects of the most important type of public-sector-sponsored training in Germany, namely the provision of specific professional skills and techniques (SPST). The analysis is based on unique administrative data, which have only recently become available. Using the inflows into unemployment for the year 1993, the empirical analysis applies local linear matching based on the estimated propensity score to estimate the average treatment effect on the treated of SPST programs by elapsed duration of unemployment. The empirical results show a negative lock-in effect for the period right after the beginning of the program and significantly positive treatment effects on employment rates of about 10 percentage points and above from about a year after the beginning of the program. The general pattern of the estimated treatment effects is quite similar for the three time intervals of elapsed unemployment considered. The positive effects tend to persist almost completely until the end of the evaluation period. The positive effects are stronger in West Germany than in East Germany.
Most of the papers in this collection are based on the conference 'Education and Training: Markets and Institutions', held at ZEW, Mannheim, in March 2005 and sponsored by the German Research Foundation (DFG) through the research network 'Flexibility in heterogeneous labour markets' (see http://www.zew.de/dfgflex). Eleven papers are part of a special issue of Empirical Economics; two papers appear as regular publications in Empirical Economics.
Does reducing student support affect scholastic performance? Evidence from a Dutch reform

Michèle Belot · Erik Canton · Dinand Webbink
Accepted: 30 August 2006 / Published online: 30 September 2006 © Springer-Verlag 2006
Abstract This paper investigates the impact of student support on performance and time allocation of students in Dutch higher education. In 1996 the maximum duration of grants was reduced by 1 year, and thereby limited to the nominal duration of the study program. This reform could have had substantial financial consequences for students. We evaluate the effects of the reform using a difference-in-differences approach. Our main findings are that after the reform, students early in their study (i) switched less to other programs, (ii) obtained higher grades, while (iii) they did not spend more time studying or working. In addition, for students not older than 20 years when they started their study we find larger effects on all performance variables (switching, percentage of completed courses, graduation in the first year and grade point averages). These findings are consistent with recent evidence on heterogeneous treatment effects for higher ability students.

Keywords Student support · Student behaviour · Policy evaluation

JEL Classification I2 · J24 · J31
M. Belot
University of Essex, Department of Economics and ISER, Wivenhoe Park, Colchester CO4 3SQ, UK

E. Canton
European Commission, Directorate-General for Enterprise and Industry, BREY 07/159, 1049 Brussels, Belgium

D. Webbink (B)
CPB Netherlands Bureau for Economic Policy Analysis, P.O. Box 80510, 2508 GM, The Hague, The Netherlands
e-mail:
[email protected]
1 Introduction

The financing of higher education is a highly debated topic in many countries. Public contributions to education, and to higher education in particular, are usually justified on the grounds of equal access for all students, the absence of appropriate capital markets, or external effects of education benefiting society at large. The debate on increases in private contributions to higher education usually focuses on the impact on enrolment decisions, especially enrolment of students from low income families. A large literature studies this issue (see for instance Dynarski 2003; Kane 1995; van der Klaauw 2002). Much less is known about the effects of public support on the performance and behaviour of students. Do higher private contributions stimulate student effort and performance or do they increase working on the side? This paper evaluates the effect of a major reform in the system of student support on the performance and time allocation of students in Dutch higher education. The reform, which was introduced in 1996, reduced the maximum duration of grants by 1 year and limited it to the nominal duration of studies. This reform applied uniformly to all students enrolling for the first time in Dutch higher education. To identify the effect of the reform we use a difference-in-differences (DD) approach. The first difference is along the time dimension: before versus after the implementation of the reform. We analyse data on freshmen enrolling a year before the reform (1995) and on freshmen enrolling a year after the reform (1997). For the second difference we exploit a specific feature of the reform. Dutch higher education consists of higher professional education and university education. The nominal duration of both types of higher education is 4 years. The actual duration, however, differs substantially. Before the reform students in higher professional education needed on average 54 months to complete their studies, compared to 66 months for university students (Statistics Netherlands). The reform reduced the duration of student support from 60 to 48 months. Hence, the price of higher education increased on average by 6 months of student support for students in higher professional education and by 12 months of student support for university students (students in higher professional education lose on average 54 − 48 = 6 months of support, while university students, whose studies already exceeded the old 60-month maximum, lose the full 60 − 48 = 12 months). The second difference is based on this difference in treatment between students in the two types of higher education. We therefore use students in higher professional education as a control group, and compare how performance and time allocation changed after the reform in those two groups. Our paper is related to a small literature that studies the impact of financial measures on student performance. Bettinger (2004) investigates the effect of means-tested financial assistance, the Pell Grant program, on student retention in US post-secondary education, conditional on initial enrolment. He uses data on college enrolments in Ohio's public 2- and 4-year colleges and exploits both the panel and cross-sectional variation to identify the causal effect of Pell grants. The results provide some evidence that Pell grants reduce students' drop-out behaviour, especially the estimates using the panel variation. These results show that a $1,000 increase in a student's Pell grant corresponds to a 9.2%-point decrease in the likelihood that students withdraw. Dynarski (2005) evaluates the effect
of merit aid grants, which were implemented on a large scale in a dozen US states in the 1990s, on college completion. Merit aid grants are based on previous schooling performance. Using micro data from 2000, Dynarski compares cohorts that were exposed to the merit aid program to those that were not. She finds that the aid programs increase the share of the population that completes a college degree by 3% points. Cornwell et al. (2003) study the transformation from need-based to merit-based funding of higher education in Georgia (US). The new funding program awards grants to academically proficient students, evaluated by their grade point average per term. However, the program had no requirements in terms of study load, so Cornwell et al. find that many students took fewer classes per term in order to qualify for the merit-based grant. Leuven et al. (2003) study the effects of financial rewards on the performance of first-year economics and business students at the University of Amsterdam in a randomised field experiment. They find non-significant average effects of financial incentives on the pass rate and the number of collected credit points. However, they find evidence for heterogeneous treatment effects. High ability students have higher pass rates and collect significantly more credit points when assigned to the high reward groups. Low ability students collect fewer credit points when assigned to higher reward groups. Angrist and Lavy (2002) report on a policy initiative in Israel aimed at increasing the matriculation rates of low-achieving students in secondary school by offering financial rewards. They find a significant positive effect of rewards on achievement. Kremer et al. (2004) analyse the effects of financial rewards on achievement for primary school girls in two districts in rural Kenya by means of a randomised experiment. They report large positive effects on both achievement and school attendance in one district, whereas there appear to be no such effects in the other district (where there were problems with program implementation). Our paper also studies the relationship between public support and the decision to work on the side. In the literature there are several studies that focus on the effect of working on the side on study performance (see for instance Stinebrickner and Stinebrickner 2003) but, to our knowledge, there are no studies that look at the relationship between public support and time allocation decisions. We find that the reform of 1996 improved scholastic performance in the first years of higher education but did not have an impact on the time allocation of students. The DD-estimates show that after the reform students switched about 5% less to other programs, while student drop-out remained unaffected. In addition, students obtained higher grades (about 0.13 points on a ten-point scale). If we restrict the sample to the youngest students, not older than 20 years when they started their study, we find larger and significant effects on all performance variables (switching, percentage of completed courses, graduation in the first year and grade point averages). Moreover, the effects increase when we move on to younger samples (not older than 19 or 18 years). These findings are consistent with recent evidence on heterogeneous treatment effects for higher ability students. A concern for the identification of the effect of the reform is the reallocation of students between higher professional education and university. We find that
the change in observable characteristics of students reduces the difference in performance after the reform. If the change in observable characteristics is a guide to the change in unobservables, the DD-estimate is biased downward by selection. In that case we can interpret the estimated effects as lower bounds. The paper is organised as follows. Section 2 describes the Dutch higher education sector and the reform of 1996. The empirical strategy is outlined in Sect. 3. In Sect. 4 we describe the data and present the empirical results. Section 5 concludes and discusses the findings.

2 Student support and higher education in the Netherlands

Dutch higher education consists of two different levels of education: higher professional education and academic education at universities. Programs in higher professional education prepare students to practise a profession and to enable them "to function self-consciously at large". Universities prepare students for independent scientific work in an academic or professional setting. There are 14 universities (including the Open University) and about 40 institutes of higher professional education. Nearly all programs have a nominal duration of four years. In 1986, the Dutch government introduced the Student Finance Act for students enrolled in higher education. This Act regulates the allocation of public grants to students, which take the form of monthly financial transfers. In addition to living expenses and direct costs (books, etc.), students pay a fixed tuition fee at the beginning of each academic year. These fees are uniform across all subjects and are substantially below the total cost of the programs. The difference between the total cost of the programs and the level of the tuition fees is borne by the government through direct payments to the providers of education. There are four categories of support: the basic grant, the supplementary grant, the loan and the "in-kind" support. The basic grant is the most widespread form of support and depends on the living situation (i.e. students living with their parents or away from home). The supplementary grant depends on parental income and characteristics of the family (means-tested). The third form of support is the loan. It can be either a debt of students who did not meet the performance requirements or an additional source of funds. In both cases students must repay their debt after their studies (with or without a degree) and within a limited time period. The last category of support is a travel pass, entitling students to free public transport (during weekends or on weekdays). The rules of student support have changed several times over the last 15 years.¹ Two types of changes deserve particular attention. First, the government tightened the performance requirements attached to the grants (basic and supplementary). Second, the duration of support has been cut several times. The most noticeable change from the student's point of view is the reform of 1996,
which introduced the so-called performance grant. The reform made all grants conditional on student performance. After the reform students received loans, which could be converted into gifts upon satisfactory performance. In addition, the reform reduced the duration of student support from 5 to 4 years. As most students need more than 4 years to finish their studies, this increased the costs of higher education. A reduction of 1 year of student support increases the cost by between 684 euro, for students living with their parents and receiving only a basic grant, and 4,385 euro, for students living away from home, enrolled in a university program and eligible for the maximum amount of supplementary grant. The difference in financial means between students starting in 1995 and those starting in 1996 is even larger, since the level of grants was reduced at the same time. It should also be noted that these increases in costs took place in a context of relatively low private contributions to higher education. Compared to the US and the UK, private contributions to higher education in the Netherlands are low. Because of the reform, studies with a large difference between actual and nominal duration became relatively more expensive. We exploit this feature of the reform in a difference-in-differences approach (see next section).

¹ Belot et al. (2004) review the main changes.

3 Empirical strategy

The amount of public support that a student receives is regulated by the Student Finance Act (cf. Sect. 2). Most of the variation in public support between students depends on parental income and characteristics of the family (means-tested). The relation between parental income and the amount of public support is approximately linear below a certain income level. As a result, the variation between students with almost the same parental income is quite small. In this paper we use another source of variation in public support: the reform of 1996. Students who enrolled before 1996 were entitled to 5 years of public support for their studies. For students who enrolled after 1996 this was reduced to 4 years. This could induce a substantial increase in the cost of higher education, up to an amount of 4,385 euro (see Sect. 2). In this paper we exploit the variation in public support induced by the reform of 1996 by comparing the behaviour (i.e. performance and time allocation) of students who enrolled in 1995 with the behaviour of students who enrolled in 1997. We start the analysis by estimating the following type of relation:

$$Y_i = \alpha + \beta X_i + \delta\,\mathrm{Year}_i + \varepsilon_i, \qquad (1)$$
where $Y_i$ is an outcome for individual $i$ (in terms of behaviour), $X_i$ is a vector of individual characteristics, which controls for differences between the student populations of 1995 and 1997, and $\mathrm{Year}_i$ is a time dummy corresponding to the year of enrolment (1995 = 0, 1997 = 1). $\varepsilon_i$ is the error term. The coefficient $\delta$ measures the difference in the behaviour of students between 1995 and 1997. This estimator is, however, confounded to the extent that it also captures the
effect of other changes that had different impacts on the behaviour of students before and after the reform. To correct for this, we adopt a difference-in-differences approach: we contrast the first difference with the before–after difference in the behaviour of a suitable control group. As a control group we use students from higher professional education, the second type of Dutch higher education. We expect that the reduction of student support had much less effect on students in higher professional education than on students in university education. The actual duration of higher professional education is substantially shorter, whereas the nominal duration does not differ. Before the reform, students in higher professional education needed on average 54 months to complete their studies, compared to 66 months for university students (Statistics Netherlands). The reform reduced the duration of student support from 60 to 48 months. Hence, the price of higher education increased on average by 6 months of student support for students in higher professional education and by 12 months of student support for university students. We therefore adopt a difference-in-differences approach in which we take students from higher professional education as the control group. The model we estimate in this approach has the following form:

$$Y_i = \alpha + \beta X_i + \delta\,\mathrm{Year}_i + \lambda\,\mathrm{Uni}_i + \gamma\,\mathrm{Year}_i \times \mathrm{Uni}_i + \varepsilon_i, \qquad (2)$$
where $\delta$ measures the overall effect of time (possibly also capturing changes other than the reform of student support), and $\mathrm{Uni}_i$ is a dummy indicating whether the student attends a university program. The difference-in-differences coefficient $\gamma$ estimates the impact of the reform. A concern with our approach is that the reform might also have affected enrolment decisions. Indeed, the change in costs could have discouraged some students from enrolling in higher education, while other students might have reallocated from university studies to higher professional education. We discuss the implications of a possible reallocation of students for our estimates in the final section.
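The difference-in-differences specification in Eq. (2) can be illustrated with a short estimation sketch. The example below is schematic only: it uses simulated data and made-up column names (gpa, year97, uni, gpa_sec, ...), not the authors' data or code, and the paper's actual estimations use the full control sets and, for some outcomes, probit or multinomial logit models rather than OLS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated student-level data; in the paper these would be the 1995 and 1997
# freshman cohorts (all column names here are made up for the illustration).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year97": rng.integers(0, 2, n),     # 1 = 1997 (post-reform) cohort
    "uni": rng.integers(0, 2, n),        # 1 = university, 0 = higher professional
    "gpa_sec": rng.normal(7.0, 0.6, n),  # secondary-school GPA (ability control)
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 25, n),
})
# Outcome: first-year grade point average with a built-in reform effect of 0.14
df["gpa"] = (6.5 + 0.3 * (df["gpa_sec"] - 7.0) + 0.05 * df["uni"]
             + 0.05 * df["year97"] + 0.14 * df["year97"] * df["uni"]
             + rng.normal(0, 0.8, n))

# Eq. (2): the coefficient on the interaction year97:uni is the
# difference-in-differences estimate of the reform effect (gamma).
model = smf.ols("gpa ~ year97 * uni + gpa_sec + female + age + I(age ** 2)",
                data=df).fit(cov_type="HC1")
print(model.params["year97:uni"], model.bse["year97:uni"])
```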
4 Data and empirical results

4.1 Data

We use data collected by the SEO/SCO-Kohnstamm Institute of the University of Amsterdam. These data come from surveys among freshmen of the academic years 1995/1996 and 1997/1998, selected randomly from a general file including all students enrolled in higher education. Questionnaires were sent out at two different points in time: one right after the beginning of the first academic year, and the second roughly one and a half years later. The sample includes 8,726 observations, 4,412 from the cohort of 1995 and 4,314 from the cohort of 1997. Stratifications in the sample are made according to type of higher education (university versus higher professional education).
Table 1 Summary statistics of dependent and explanatory variables

| | Higher professional 1995 | Higher professional 1997 | University 1995 | University 1997 |
| Performance | | | | |
| Position at second survey | | | | |
| Same study (%) | 80.9 | 83.8 | 80.1 | 87.2 |
| Changed study (%) | 13.4 | 12.7 | 17.8 | 11.3 |
| No higher education (%) | 5.6 | 3.5 | 2.2 | 1.4 |
| First-year performance | | | | |
| Passed first-year exam (%) | 64.4 | 63.1 | 61.5 | 62.4 |
| Percentage of completed courses | 85.6 | 88.6 | 81.6 | 86.7 |
| Grade point average | 6.71 (0.79) | 6.81 (0.78) | 6.74 (0.95) | 6.92 (0.88) |
| Time allocation | | | | |
| Total hours spent on education | 35.3 (12.3) | 33.3 (12.4) | 33.0 (11.5) | 30.5 (11.1) |
| Job on the side (%) | 54.6 | 58.6 | 41.3 | 54.8 |
| Hours worked job on the side | 5.1 (5.7) | 7.1 (6.5) | 3.6 (4.8) | 5.5 (6.5) |
| Background | | | | |
| Age (average in years) | 18.8 | 18.9 | 18.5 | 18.5 |
| Female (%) | 46.8 | 46.9 | 56.6 | 53.3 |
| Ethnicity Dutch (%) | 93.1 | 92.1 | 95.5 | 93.1 |
| Parental income (1–13) | 4.7 (3.4) | 5.6 (3.9) | 5.6 (3.2) | 6.5 (3.5) |
| Education mother (1–12) | 5.7 (2.9) | 6.1 (2.9) | 7.3 (3.2) | 7.7 (3.1) |
| Education father (1–12) | 6.8 (3.3) | 7.1 (3.3) | 8.8 (3.3) | 9.0 (3.3) |
| Ability | | | | |
| Repeated class in previous education (%) | 41.7 | 38.5 | 21.3 | 19.4 |
| Pre-university education certificate (%) | 17.2 | 17.7 | 98.6 | 98.3 |
| Grade point average exam sec. education | 6.70 (0.55) | 6.73 (0.54) | 7.02 (0.70) | 7.03 (0.69) |

Standard deviations are in parentheses
Further stratifications are made according to study field (based on the so-called HOOP classification into eight categories used in the Netherlands).² Table 1 shows summary statistics for both cohorts and in both types of education. We limit the sample to students not older than 30 years enrolling for the first time in full-time higher education. The top panel shows the statistics for the dependent variables, the bottom panel shows statistics for the explanatory variables. In our data we have four measures of scholastic performance: (1) educational position at the time of the second survey, (2) percentage of completed courses in the first year, (3) passing the first-year exam, and (4) grade point average in the first year.

² For both cohorts, the response rate in the second questionnaire was substantially lower than in the first questionnaire (33 and 39% fewer students, respectively, answered the second questionnaire). We tested whether the probability of not responding to the second questionnaire was significantly different in 1995 and 1997, regressing an indicator for non-response on a time dummy and additional individual characteristics. We find no significant difference in attrition between 1995 and 1997. Moreover, most coefficients are insignificantly different from zero, including those on the ability variables. Therefore, bias due to attrition is likely to be limited (results can be found in Belot et al. 2004).
The sample statistics show that after the reform students were more likely to pursue their studies in the second year, completed more courses and improved on the grade point averages in the first year. This holds especially for university students. For both types of higher education we observe a decrease in the hours spent on education and an increase in working hours in jobs on the side. In the analysis we control for three groups of variables. First, we correct for study field, as the sampling procedure was slightly different in 1995 and 1997. Study field is a classification of the discipline into eight categories. Second, we control for ability, measured by the grade point average of the final exam at secondary school, the type of secondary education, and class retention. Third, we control for personal and social background variables. In particular, we include age, gender, ethnic background, parental income, and parental education. The summary statistics show that after the reform the social background of students is higher in both types of higher education. For the ability variables we observe only small changes.

4.2 Empirical results

We now estimate the effects of the reform on the behaviour of students in terms of scholastic performance and effort (time allocation). For all dependent variables we estimate models like Eqs. (1) and (2), using various combinations of control variables. After presenting the empirical results we discuss the implications of the possible reallocation of students due to the reform. We start with the estimates for all freshmen of the academic years 1995/1996 and 1997/1998.

4.2.1 Student performance

Table 2 shows the estimation results for the four measures of scholastic performance. Columns (1) and (2) show the estimates of the before–after comparison [Eq. (1)] for university and higher professional education. Columns (3), (4), and (5) show the DD-estimates for three specifications: column (3) only controls for study field, column (4) also includes ability, and column (5) includes ability and background. We consider the impact of the reform on switching behaviour and drop-out with a multinomial logit model with three options: continue the program, switch to another program, or drop out of higher education. Column (1) shows that after the reform university students switch about 6% less to other studies. This result is statistically significant at the 1% level. Students in higher professional education did not change their switching behaviour after the reform. The DD-estimates confirm these results: university students have an approximately 5 percentage point lower probability of switching after the reform, and this outcome is robust to the inclusion of additional control variables. For the drop-out option we find no changes. We estimate the effect on the percentage of completed courses with OLS. The before–after comparison shows that students in both university and higher professional education passed more courses after the reform.
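Before turning to the estimates in Table 2, here is a minimal sketch of how the three-option switching/drop-out margin described above could be set up as a multinomial logit with the DD interaction. Again, the data are simulated and the column names are hypothetical; Table 2 reports the authors' actual marginal effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of the three-option outcome used in the paper:
# 0 = continue the program, 1 = switch to another program, 2 = leave higher education.
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "year97": rng.integers(0, 2, n),     # 1 = 1997 (post-reform) cohort
    "uni": rng.integers(0, 2, n),        # 1 = university, 0 = higher professional
    "gpa_sec": rng.normal(7.0, 0.6, n),  # secondary-school GPA (ability control)
})
df["position"] = rng.choice([0, 1, 2], size=n, p=[0.82, 0.14, 0.04])

# Multinomial logit with the DD interaction; the average marginal effects of
# the interaction on the switch and drop-out probabilities are analogous to
# the "Switch" and "Drop-out" marginal effects reported in Table 2.
mnl = smf.mnlogit("position ~ year97 * uni + gpa_sec", data=df).fit(disp=False)
print(mnl.get_margeff(at="overall").summary())
```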
Table 2 Treatment effect on student performance (multinomial logit / OLS / probit estimates, marginal effects)

| | Before–after: University (1) | Before–after: Higher prof. (2) | DD1 (3) | DD2 (4) | DD3 (5) |
| Switch | −0.059*** (0.015) | −0.008 (0.017) | −0.047** (0.020) | −0.048** (0.019) | −0.048** (0.019) |
| Drop-out | −0.000 (0.001) | −0.013 (0.006) | 0.004 (0.011) | 0.003 (0.010) | 0.002 (0.001) |
| Observations | 2,084 | 1,687 | 3,771 | 3,771 | 3,771 |
| Completed courses (%) | 4.171*** (1.040) | 1.990* (1.111) | 1.987 (1.589) | 2.270 (1.489) | 2.155 (1.493) |
| Observations | 2,066 | 1,653 | 3,719 | 3,719 | 3,719 |
| Passed first-year exam | 0.037 (0.025) | 0.028 (0.027) | −0.008 (0.034) | 0.010 (0.035) | 0.010 (0.035) |
| Observations | 2,084 | 1,686 | 3,771 | 3,771 | 3,771 |
| Grade point average | 0.136*** (0.036) | −0.024 (0.038) | 0.129** (0.059) | 0.139*** (0.051) | 0.136*** (0.051) |
| Observations | 2,034 | 1,596 | 3,630 | 3,630 | 3,630 |
| Controls: Study field | Yes | Yes | Yes | Yes | Yes |
| Controls: Ability | Yes | Yes | No | Yes | Yes |
| Controls: Background | Yes | Yes | No | No | Yes |

Standard errors are in parentheses. The ability controls include grade point average at high school, class retention, and type of high school diploma. "Background" includes gender, age, age squared, parental income, and parental education.
*** Significant at the 1% level, ** significant at the 5% level, * significant at the 10% level
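For orientation, the unadjusted difference-in-differences implied by the sample means in Table 1 can be computed by hand; for the percentage of completed courses it is

$$(86.7 - 81.6) - (88.6 - 85.6) = 5.1 - 3.0 = 2.1,$$

which is close to the regression-based DD estimates of 1.99–2.27 reported in Table 2.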
The DD-estimates indicate that university students improved more than students from higher professional education, but the standard errors are (slightly) too large for the effects to be significant. The probit estimates for passing the first-year exam also indicate an improvement after the reform for both types of higher education. However, none of the DD-estimates is significant. The OLS estimates for the grade point average show a clear picture: the grade point averages of university students increased significantly after the reform, whereas those of students in higher professional education did not change. All the DD-estimates are significant.

4.2.2 Allocation of time

It is not clear how students would react to the reform. They could invest more time in studying in order to avoid a longer and more costly duration of the study. Another option is that students tried to compensate for the loss in public support by working more often and for more hours on the side. This could especially be true if students have some aversion to borrowing from the government. Table 3 shows the estimation results for three dependent variables: weekly study hours, jobs on the side, and working hours.
Table 3 Treatment effect on time allocation (OLS / probit estimates, marginal effects)

| | Before–after: University | Before–after: Higher prof. | DD1 | DD2 | DD3 |
| Education hours | −2.598*** (0.389) | −1.711*** (0.481) | −0.873 (0.599) | −0.863 (0.598) | −0.790 (0.598) |
| Observations | 3,178 | 2,875 | 6,053 | 6,053 | 6,053 |
| Working | 0.147*** (0.019) | 0.099*** (0.019) | 0.029 (0.026) | 0.025 (0.026) | 0.036 (0.027) |
| Observations | 3,201 | 2,919 | 6,126 | 6,126 | 6,126 |
| Hours work | 1.865*** (0.201) | 1.844*** (0.239) | −0.064 (0.303) | −0.135 (0.302) | 0.015 (0.301) |
| Observations | 3,182 | 2,880 | 6,062 | 6,062 | 6,062 |
| Controls: Study field | Yes | Yes | Yes | Yes | Yes |
| Controls: Ability | Yes | Yes | No | Yes | Yes |
| Controls: Background | Yes | Yes | No | No | Yes |

Standard errors are in parentheses. The ability controls include grade point average at high school, class retention, and type of high school diploma. "Background" includes gender, age, age squared, and parental income.
The estimates in Table 3 indicate that students did not choose the first option of putting more effort into studying. The before–after comparison shows that students in both types of higher education report spending less time on education. However, the DD-estimates are not significant. The same holds for the other two dependent variables. We find that students from both university and higher professional education work more often and for more hours in jobs on the side, but the DD-estimates are not significant. The increase in working on the side for both types of higher education students might be related to macroeconomic conditions. During the second half of the 1990s the Dutch economy boomed and the labour market became very tight, offering many part-time working opportunities, especially for highly educated young people. The unemployment rate decreased from 7.0% in 1995 to 5.5% in 1997, and further to 2.6% in 2000.
4.2.3 Young students

In the previous analysis we included all freshmen not older than 30 years. This includes students without any delay on their way through secondary education to higher education, as well as students who had some delay or did not enrol directly after graduating from secondary education. In this section we restrict the sample to the youngest group: students not older than 20, 19, or 18 years when they started their study. We have three reasons to focus on this group. First, we expect that this group is more homogeneous and more often followed the so-called 'royal' way to higher education, that is, directly from pre-university education to university education and directly from higher general secondary education to higher professional education.
Table 4 DD-effect on student performance for young students (marginal effects)

| Age (years) | ≤20 | ≤19 | ≤18 |
| Switch | −0.051** (0.020) | −0.058** (0.021) | −0.057** (0.024) |
| Observations | 3,546 | 3,186 | 2,266 |
| Percentage of completed courses | 3.201** (1.536) | 4.329*** (1.585) | 5.641*** (1.786) |
| Observations | 3,500 | 3,146 | 2,234 |
| Passed first-year exam | 0.031 (0.036) | 0.074** (0.037) | 0.088** (0.042) |
| Observations | 3,545 | 3,185 | 2,266 |
| Grade point average | 0.171*** (0.053) | 0.185*** (0.055) | 0.241*** (0.065) |
| Observations | 3,414 | 3,067 | 2,176 |

Standard errors are in parentheses. All controls are included.
education to higher professional education. Second, the average ability of this group will presumably be higher. We find that the grade point average of the final exam decreases with age (7.0 for the 20 year university sample, 7.1 for the 19 year university sample, 7.2 for the 18 year university sample). Several recent studies find evidence for heterogeneous treatment effects (Leuven et al. 2004; Angrist and Lavy 2002). Higher ability students are found to be more sensitive for financial incentives. For students at the higher end of the skill distribution an increase in performance might be feasible more often, and come at lower costs in terms of effort. Third, the financial impact of the reform might have been larger for the youngest university students. The actual duration of the study might be smaller for students with above average ability. If, for instance, these students would on average need three months less for finishing a study in higher professional education (51 instead of 54 months) and in university (63 instead of 66 months), the reform increased the relative costs of university education with 9 months instead of 6 months for the total group. These arguments suggest that the estimated effects of the reform should be larger for the group of young students and increase with the ability of the samples. Table 4 shows the results for the difference-in-differences estimations including all available controls (i.e. the DD3 model) when the group is restricted to students not older than 20, 19, and 18 years respectively. The estimates for the performance variables show a clear picture. The scholastic performance of young students increased on all indicators. All DD-estimates are larger than those in Table 3 and statistically significant with one exception. In addition, most effects increase when we move on to younger samples. These findings are consistent with the evidence on heterogeneous treatment effects for higher ability students. We also estimated DD-models for the three time allocation variables. The findings (not shown in Table 4) are quite comparable with those in Table 3. The allocation of time did not change for the youngest students.
4.2.4 Selection on observables and unobservables

A crucial question is whether one can attribute the effects measured by the difference-in-differences estimates to the reform. How can we be sure that the increase in performance is not simply the result of low-ability students selecting into higher professional education? In the previous sections we showed that the estimates are robust to various sets of controls. In addition, the point estimates in the models with controls are larger than in the models without controls. This suggests that after the reform students selected into types of higher education in which their performance is on average lower. However, selection on unobservable characteristics may bias the DD-estimates. In the absence of experimental data or instrumental variables that affect the choice of the type of higher education but not performance in higher education, it is difficult to address this problem. In a recent paper on the effectiveness of Catholic schools, Altonji et al. (2005) propose to use information on the observable characteristics as a guide to the degree of selection on the unobservable characteristics. They measure the amount of selection on an index of observables in the outcome equation and then calculate how large selection on unobservables would need to be in order to attribute the entire effect (of Catholic school attendance) to selection bias. We follow this approach to assess the type of selection problem we may be facing. The impact of selection on observables on the change in performance can be calculated by rearranging Eq. (2) into:
\[
\left(\bar{\hat{y}}^{\,U}_{97} - \bar{\hat{y}}^{\,U}_{95}\right) - \left(\bar{\hat{y}}^{\,H}_{97} - \bar{\hat{y}}^{\,H}_{95}\right)
= \beta\left\{\left(X^{U}_{97} - X^{U}_{95}\right) - \left(X^{H}_{97} - X^{H}_{95}\right)\right\} + \gamma ,
\tag{3}
\]
where $\bar{\hat{y}}^{\,j}_{t}$ is the average predicted value of a performance variable (y) for group j (j = {University, Higher professional}) in year t. The left side of Eq. (3) shows the total predicted change in a performance variable; the right side shows the contribution of the change in observables and the DD-estimator. Following Altonji et al. (2005), we may use the change in observables as a guide for the change in unobservables. If the change in observable characteristics of the population explains relatively little of the total predicted change, then it is unlikely that unobservable characteristics play a major role in the total predicted change. In Table 5 we calculate the three components of Eq. (3) based on the estimates for students not older than 18 years in Table 4. Column (1) in Table 5 shows the total predicted change, column (2) shows the DD-estimate from Table 4 and column (3) shows the contribution of the change in observable characteristics of the population to the overall predicted change. Two points should be noted. First, the contribution of the change in observables is quite small compared to the DD-estimator. Second, the changes in the observable characteristics of the different groups are such that university students should have performed worse on average after the reform. In other words, if we did not control for selection on observables, we would underestimate the effect of the reform. This result may be counterintuitive, as one would expect that students self-select in such a way that only the most able students
Table 5 Decomposition of the predicted change in performance

Performance variable          Total predicted change (1)   DD estimate [Table 4, column (3)] (2)   Change in observable characteristics (3)
Completed courses (%)         4.80                         5.64                                    −0.84
Passed first-year exam (%)    7.30                         8.80                                    −1.50
Grade point average           0.15                         0.24                                    −0.09
enrolled in university. However, this might not have happened because of specific features of the reform. The reform changed the relative prices of university and higher professional education only if the expected duration of studies at university or in higher professional education is less than a year more than the nominal duration. In Belot et al. (2004) we argue that this especially applies to students around and above the mean ability level, and less to low ability students who enrolled in university before the reform. The main finding in Table 5 is that the population of university students has lower observed scholastic aptitudes after the reform. Following Altonji et al., we expect that their unobservable characteristics most likely followed the same trend. Therefore, it seems implausible that the estimated positive effect of the reform on performance is driven by favourable changes in the unobservable characteristics. We could even argue that our estimates are a lower bound on the true effects of the reform on student performance.
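To make the decomposition in Eq. (3) and Table 5 concrete, the sketch below computes the three components from a fitted performance equation. It is only an illustration under assumed inputs: the fitted model `fit`, the group-by-year design matrices and the DD coefficient `gamma` are hypothetical names, not objects from the paper.

```python
# Sketch of the decomposition in Eq. (3): total predicted change
# = contribution of the change in observables + DD estimate (gamma).
# `fit` and the design matrices are hypothetical placeholders.
def decompose(fit, X95_uni, X97_uni, X95_hp, X97_hp, gamma):
    """Return the three components reported in Table 5 for one outcome."""
    mean_pred = lambda X: fit.predict(X).mean()
    total = (mean_pred(X97_uni) - mean_pred(X95_uni)) \
            - (mean_pred(X97_hp) - mean_pred(X95_hp))
    observables = total - gamma   # beta * {(X97_U - X95_U) - (X97_H - X95_H)}
    return {"total predicted change": total,
            "DD estimate": gamma,
            "change in observables": observables}
```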
5 Conclusions and discussion

This paper investigates the effects of public support on student performance and time allocation in Dutch higher education. In 1996 a major reform was implemented in the Dutch system of student support. We exploit this reform as a source of exogenous variation in order to identify the effect of student support on performance and time allocation, using survey data on freshmen enrolled in 1995 and 1997. We find the following effects of the reform. First, students switched to other programs about 5% less often, while student drop-out is basically unaffected. Second, scholastic performance in the first year improved. Students obtained higher grades (about 0.13 points on a ten-point scale). The impact on student performance is larger if we consider only young students. We find that students not older than 20 years (when they started their study) improved on all performance indicators: switching, passing the first-year exam, percentage of completed courses, and grade point average. In addition, we find that the improvement is larger as we move to younger students. These findings are consistent with recent studies providing evidence for heterogeneous treatment effects. Third, the time allocation of students (hours spent on study and work, and incidence of jobs on the side) remains unchanged. This is remarkable
given that scholastic performance increased. Several recent studies (Leuven et al. 2004; Kremer et al. 2004) also find increases in performance and no significant changes in self-reported effort. In these studies it is argued that measurement error in the self-reported effort variables might explain this finding. A crucial question is whether the changes between 1995 and 1997 are the result of the reform or the result of unobserved factors. In the absence of experimental data or instrumental variables it is difficult to address this question. Altonji et al. (2005) propose to use information on the observable characteristics as a guide to the degree of selection on the unobservable characteristics. Following this approach, we find that the contribution of the change in observables is quite small compared to the DD-estimator, and that the change in observables increases the difference in performance. If the change in unobservables follows the trend of the change in observables, this would lead to a downward bias of the DD-estimate. Hence, we would underestimate the effect of the reform. Another issue concerns the long-term effects of the reform. Our estimates only relate to the effects for students in the first 2 years of their studies. Unfortunately, we do not have data on the performance of these students in later stages of their studies. In addition, to our knowledge there are no other Dutch micro data available that could shed light on this issue. The only available data are population statistics. These statistics show that the graduation rate for the university cohort of 1995 after 5, 6, and 7 years is respectively 26, 42, and 54%. For the cohort of 1997 this is 26, 43, and 55%. The figures for higher professional education are: 58, 65, and 68% for the 1995 cohort and 58, 65, and 69% for the 1997 cohort. The average study duration of university graduates did not change (66 months for both cohorts) and decreased from 54 to 52 months for graduates of higher professional education. These statistics show slight improvements in performance in higher education but no clear improvement in the difference-in-differences approach, i.e. when using higher professional education as a control. Hence, it is unclear whether the early gains of the reform carry over to later stages of higher education. The changes in study duration are in line with the change in observables we find. It should be noted that these averages for study duration are based on students who graduated and exclude students who did not complete their studies. The population statistics do not provide information on changes in the quality of the performance, for instance measured by grade point averages. We conclude that the reform of 1996 is the most credible explanation for the changes in student performance, but we cannot completely rule out the impact of other factors.
Acknowledgements We would like to thank the guest editor Stephen Machin, two anonymous referees, Joop Hartog, Pierre Koning, Hessel Oosterbeek, and seminar participants at the 2004 conference of the European Association of Labour Economists, CPB, Institute for Advanced Studies in Vienna, NAKE research day 2004 and ROA University Maastricht for useful comments. The views expressed here are our own and should not be attributed to the European Commission.
References
Altonji JG, Elder TE, Taber CR (2005) Selection on observed and unobserved variables: assessing the effectiveness of Catholic schools. J Polit Econ 113:151–184
Angrist JD, Lavy V (2002) The effect of high school matriculation awards: evidence from randomized trials. NBER Working Paper, no. 9389
Avery C, Hoxby C (2004) Do and should financial aid packages affect students' college choices? In: Hoxby C (ed) College decisions: the new economics of choosing, attending and completing college. University of Chicago Press, Chicago
Belot M, Canton E, Webbink D (2004) Does reducing student support affect educational choices and performance? Evidence from a Dutch reform. CPB Discussion Paper 35
Bettinger E (2004) How financial aid affects persistence. NBER Working Paper, no. 10242
Cornwell CM, Lee KH, Mustard DB (2003) The effects of merit-based financial aid on course enrollment, withdrawal and completion in college. Mimeo, University of Georgia
Dynarski S (2003) Does aid matter? Measuring the effect of student aid on college attendance and completion. Am Econ Rev 93:279–288
Dynarski S (2005) Finishing college: the role of state policy in degree attainment. Mimeo, Harvard University, Kennedy School of Government
Kane TJ (1995) Rising public college tuition and college entry: how well do public subsidies promote access to college? NBER Working Paper, no. 5164
Kremer M, Miguel E, Thornton R (2004) Incentives to learn. Unpublished working paper, Harvard University
Leuven E, Oosterbeek H, van der Klaauw B (2003) The effect of financial rewards on students' achievements: evidence from a randomized experiment. CEPR Discussion Paper, no. 3921
Stinebrickner R, Stinebrickner TR (2003) Working during school and academic performance. J Labor Econ 21:473–491
Van der Klaauw W (2002) Estimating the effect of financial aid offers on college enrollment: a regression-discontinuity approach. Int Econ Rev 43:1249–1288
Part-time work, school success and school leaving Christian Dustmann · Arthur van Soest
Revised: 15 March 2006 / Published online: 26 August 2006 © Springer-Verlag 2006
Abstract In this paper, we analyse part-time employment of teenagers still in full-time education, their academic performance, and their school leaving decisions. Our estimation strategy takes account of the possible interdependencies of these events and distinguishes between two alternative states to full-time education: entering the labour force full time and going on to further training. We model this decision in a flexible way. Our analysis is based on data from the UK National Child Development Study, which has an unusually rich set of variables on school and parental characteristics. Our main finding is that working part time while in full-time education has only small adverse effects on exam performance for males, and no effects for females. The effect of part-time work on the decision to stay on at school is also negative, but small, and marginally significant for males, but not for females. Other important determinants of exam success, as well as of the continuation decision, are parental ambitions about the child's future academic career.
Keywords Teenage labour supply · Educational attainment · Training
JEL Classification C35 · I20 · J24
We are grateful to Steve Machin and two anonymous referees for useful comments and suggestions. C. Dustmann (B) Department of Economics, University College London, London, WC1E 6BT, UK e-mail:
[email protected] A. van Soest Tilburg University, RAND Corporation, P.O. Box 90153, 5000 LE Tilburg, The Netherlands e-mail:
[email protected]
1 Introduction

In Britain, the age of 16 marks an important milestone in the lives of young people, who face a series of significant educational and labour market choices. One decision facing 16-year olds still in full-time education is whether they should work part-time or not. The age of 16 also represents the time that pupils sit their first set of public examinations, the results of which can be crucial in determining eligibility for further education and career success. Yet another choice facing the teenager is what to do after completion of their compulsory full-time education. Should they remain in school, go into vocational training (possibly combined with work or part-time education), or join the full-time labour market? Given the importance of the choices made at 16, it is not surprising that part-time work, academic success and school-leaving decisions have been the focus of previous literature. According to the 1992 UK Labour Force Survey, one-third of 16- and 17-year olds in full-time education had a part-time job (see Sly 1993). In 1992, 23.8% of all 16-year olds in full-time education worked part time; in 2004, this percentage had increased to 28.2%.1 Micklewright et al. (1994), using data from the Family Expenditure Survey (fes), found a similar pattern of teenage working habits. Studies based on US data indicate that part-time work amongst those in full-time education is not only a UK phenomenon. For instance, Griliches (1980) analysed different data sets for the years 1966 and 1974 and found that at least 50% of all high school graduates worked and studied simultaneously. The factors affecting levels of educational attainment have also been the subject of empirical analysis. Studies have typically tended to address the question of whether levels of educational attainment can be explained by differences in school quality or are due to differences in individual characteristics and parental inputs (see, for instance, Steedman 1983; Robertson and Symons 1990). Finally, concerns relating to the proportion of British teenagers remaining in education beyond the minimum school leaving age have prompted a range of studies examining the staying-on decision. Rice (1987), Micklewright et al. (1989) and Micklewright (1989) all examine the factors which influence the school leaving decision. Dustmann et al. (2003) find that class size is an important determinant. Policy concerns have arisen because of the low number of teenagers enrolling in further training courses, as discussed by Booth and Satchell (1994) in a study on the factors affecting the take-up of apprenticeships. Although teenage labour supply, school performance and school-leaving decisions have all individually been the subject of extensive empirical examination, the possible links amongst the three activities have attracted less attention. There are a number of studies that have considered, for example, the effects of part-time work by those still in school on educational and occupational expectations (Griliches 1980), as well as its impact on subsequent wage rates
1 Own calculations, based on British Labour Force Survey.
(Ehrenberg and Sherman 1987). Ehrenberg and Sherman, investigating the effect of part-time work during full-time education on academic performance and school enrollment in the next year, find no effect on grade point averages, but a negative effect on next year's enrollment probabilities. Eckstein and Wolpin (1999) find that working while in school reduces school performance. On the other hand, working part time during full-time education may provide teenagers with a taste of what the labour market is like, and may allow them to make more informed career choices. Investigating effects of working while in school on future economic outcomes, Ruhm (1997) and Light (2001) find a positive correlation, while Hotz et al. (2002) argue that positive effects diminish when controlling for selection. Decisions to work part-time, school performance and educational and occupational choices may be simultaneously determined. A priori, the relationship between working part-time while still in full-time education and the school leaving decision is unclear. On the one hand, working and studying at the same time may be an indication that the teenager wishes to join the labour market as soon as possible. On the other hand, it may provide the young person with first-hand information about the negative aspects of jobs which are available for low skilled labour, and this may discourage the teenager from leaving school and taking a full-time job without further training or education. Similarly, school performance is likely to be affected by hours worked, and one would expect a negative correlation between hours worked at 16 and examination success. The possible negative effect of working part-time while in full-time education is particularly relevant to the current debate (although at the post-secondary level) about the introduction of tuition fees, which are being discussed or have already been implemented in many European countries. In turn, success in public examinations at 16 will have some bearing on the decision to continue with schooling beyond the minimum leaving age, particularly if schools require pupils to have achieved a certain educational standard before allowing them to proceed any further. Thus, hours worked at 16 may have a direct effect on school leaving decisions, as well as an indirect effect through examination results. In this paper we incorporate the possible links between working part-time, school performance and school leaving decisions into a three-equation model based on data taken from the third and fourth waves of the National Child Development Study (ncds). We allow the number of hours worked to affect both examination results and the school leaving decision, and we allow examination performance to influence school leaving. We model these three outcomes simultaneously. In contrast to earlier studies we differentiate between those 16-year olds who leave school to enter the labour force and those who leave to go on to further training combined with work. This is an important distinction since a large percentage of school leavers do not enter the labour market immediately. The paper is structured as follows. In the next section, we discuss the data used for the estimation. In Sect. 3, we present the econometric model. Section 4 discusses the results, and Sect. 5 concludes.
2 Data and variables

We base our analysis of participation, school success and school leaving on data taken from the National Child Development Study (ncds), which followed a cohort of individuals born during 3rd to 9th March 1958 (see Micklewright 1986 for a detailed description of the data). The same data source is used for several other studies in the UK on similar topics, such as Dolton and Vignoles (2000), Harmon and Walker (2000), Feinstein and Symons (1999), Currie and Thomas (2001), Robertson and Symons (1996), Dearden et al. (2002), and Dustmann et al. (2003). Of particular interest are the data recorded in the third and fourth sweeps of the survey (ncds3 and ncds4) and the information collected in the Public Examinations Survey (pes), a follow-up survey to ncds3. ncds3 records extensive information about the respondents, such as educational and physical development, aspirations for the future, spare time activities etc., as well as much of the usual information gathered in household surveys. A similar range of information was also gathered for ncds4, conducted in 1981 when cohort members were aged 23, as well as further details covering education and employment experience. We thus have a very detailed picture of each teenager and his or her family prior to and after the individual has made his or her choices at the age of 16. The ncds teenagers were the first school cohort who were legally required to stay in full-time education until the age of 16. Although providing a remarkably rich source of information, the ncds is not without drawbacks. It is recognized that there have been a series of changes in the structure and organisation of schooling and further education in England and Wales over the last decades, and these may have had some impact on teenagers' attitudes to schooling, training and work. Also, the ncds cohort reached the minimum school leaving age at a time when the youth labour market was very different in comparison with now. Despite these factors, an examination of the ncds should still yield some insights which are of relevance for education and training policies today. Despite the numerous changes in the secondary and tertiary education sectors, teenagers today still face the same threefold choice as those in 1974. As part of ncds3, individuals were asked whether they had a regular part-time job during term time and how many hours they worked per week, with the responses being recorded in a banded form. We use this information to construct a measure of weekly hours worked while still in full-time education. The data set used for estimation is based on a sub-sample of 3,427 cases out of a possible 11,602 who were traced at ncds3, pes and ncds4. Differences in the educational system in Scotland restricted our analysis to teenagers living in England and Wales. Information collected at the third sweep was retrieved from four separate sources (from the cohort member, from his or her parents, from the school that the 16 year olds attended and from the teenager's doctor) and for a number of respondents one or more of the questionnaires was not completed. The timing of ncds3 in Spring 1974 means that we observe the cohort members when they are still in full-time compulsory secondary education and just
a few months before they sat their first set of public examinations, O'levels and Certificates of Secondary Education (cses), in June. On the basis of the information recorded in ncds3 alone we are unable to determine how the cohort members performed in their examinations, nor whether they decided to leave school at the first available opportunity (June 1974). Fortunately, the pes conducted in 1978 has detailed information on the examination results of some 95% of respondents to ncds3, obtained from the schools that the ncds children attended. We take as our measure of academic success the number of Ordinary level (O'level) passes achieved by the ncds cohort members by 1974. At the time of the survey, two sets of public examinations were in existence: Ordinary level examinations and Certificates of Secondary Education (cses). For O'levels, candidates were graded on a scale of A–E, where C and above was considered a pass. For cses, results were graded from 1 to 5 and a Grade One was considered to be an O'level equivalent. We therefore use the term O level to include cse Grade One passes. Using the lower cse grades as well is beyond the scope of the current paper because there is no comparable grade for O'level candidates. For information on school leaving decisions, we draw on ncds4. As part of ncds4, respondents completed a month-by-month diary which recorded their economic activity. We use the information recorded in February 1975 to see whether the cohort members had, at the end of their sixteenth year, decided to continue with full-time school, or whether they had gone on to do some form of vocational training.2 An important issue was missing or incorrectly recorded information. Our final sample of 3,427 observations is considerably smaller than the total number of 11,602 individuals that were interviewed in ncds3, pes and ncds4. In Table A1 in the appendix, we report means and standard deviations for some variables for our balanced estimation sample, and for the sample of individuals reported in ncds3, pes and ncds4, dropping only the missing cases for each specific variable in isolation. Means are reassuringly similar, suggesting that combined attrition due to missing information in variables used for our analysis does not change the sample composition, at least based on observables.

2.1 Variables

Table 1 shows the means for all the dependent variables used in our analysis, for both the male and the female sample, together with brief variable definitions. At the end of the 16th year, about 32% of both males and females have decided to stay on at school. 38% of males, but only 22% of females, have enrolled in training schemes, with the remaining 46% of females and 30% of males having
Table 1 Descriptive statistics, activity choice, hours worked and exam success

Variable   Description                                   Female (n = 1713)        Male (n = 1714)
                                                         Percentage    Mean       Percentage    Mean
AT16       Choice of activity at end of 16th year
  0        Stay at school                                31.560                   31.800
  1        Enroll on training scheme                     22.090                   38.020
  2        Enter the labour force                        46.340                   30.180
HOURS      Index of hours worked part-time at 16
  None                                                   49.550                   47.960
  0–3                                                    2.580                    6.380
  3–6                                                    16.890                   13.810
  6–9                                                    21.620                   13.600
  9–12                                                   5.000                    6.430
  12–15                                                  1.840                    5.020
  15+                                                    2.520                    6.800
EXAM       Number of O'levels/CSE Grade One passes                     2.207                    2.433
joined the labour market full-time. Thus, although staying-on rates are similar for males and females, among those who do not stay on in full-time education a larger fraction of males than of females obtains further training. While at school, and before sitting the final examinations, nearly 1 in 2 individuals works. Of those who work, hours worked are concentrated in the 3–9 weekly hours range, with more male than female teenagers in the range above 15 hours. For exam results, we report the average number of O'levels achieved (including cse Grade Ones), which is somewhat higher for males than for females. Table 2 reports means and variable descriptions for the explanatory variables in our analysis. These include a large range of family and parental background variables, the child's school background variables, and the interest parents express in their children's school work and educational career. We further include a measure of the child's ability. Parental and family background variables comprise the number of older and younger siblings, labour market status and occupational level of the parents, the parents' educational level, the income of the household,3 and a measure of the 16-year-old's ethnicity. For the child's school background, we use variables which specify the type of school that the 16-year old attended in 1974. During the early 1970s, a tripartite selection-based system of grammar schools, secondary modern schools and technical schools was still being used in many local authorities. Performance in the 'eleven plus' examination taken at age 11 or 12 was used to select pupils
3 The income information in ncds3 is recorded in a banded form. We constructed a continuous measure of income, taking into account all sources of household income, following Micklewright (1986).
Table 2 Descriptive statistics, explanatory variables

Variable       Description                                                                 Female (n = 1713)      Male (n = 1714)
                                                                                           Mean      Std. Dev.    Mean      Std. Dev.
oldsib         Number of older siblings                                                    0.4294    0.64         0.426     0.63
yngsib         Number of younger siblings                                                  1.2118    1.23         1.195     1.25
paageft L ∗    Age father left full-time education                                         4.0187    1.75         4.005     1.71
maageft L ∗    Age mother left full-time education                                         4.0099    1.39         4.028     1.42
unrate E       Regional unemployment rate for school leavers                               0.0388    0.04         0.040     0.04
ctratio        Child–teacher ratio at the school level                                     17.392    14.08        17.203    1.91
able7          % score on sum of age 7 maths and reading test                              72.298    21.26        75.437    19.68
loginc E       Logarithm of household income                                               3.864     0.37         3.858     0.42
pawork         Father working                                                              0.912                  0.896
nopa           No father                                                                   0.037                  0.047
mawork         Mother working                                                              0.701                  0.681
paprof L       Father's occupational class 'professional'                                  0.055                  0.059
paskill L      Father's occupational class 'skilled'                                       0.515                  0.481
pass L         Father's occupational class 'semi-skilled'                                  0.339                  0.349
paserv L       Father's socioeconomic group 'service industry'                             0.006                  0.003
pafarm L       Father's socioeconomic group 'agricultural worker'                          0.023                  0.028
maprof L       Mother's occupational class 'professional'                                  0.003                  0.002
maserv L       Mother's socioeconomic group 'service industry'                             0.128                  0.113
kidnoeur ∗∗    Teenager not European                                                       0.014                  0.009
comp           Teenager attends a comprehensive school (non-selective state run)           0.539                  0.521
grammar        Teenager attends a grammar school (higher ability state run)                0.133                  0.165
special        Teenager attends a special school (handicapped and special needs children)  0.023                  0.017
indep          Teenager attends a private school                                           0.048                  0.040
singsex        Teenager attends a single sex school                                        0.249                  0.284
modern         Teenager attends a secondary modern school                                  0.243                  0.248
tech           Teenager attends a technical school                                         0.011                  0.005
intpar         Teacher considers parents to be interested in teenager's school work        0.736                  0.755
parleave       Parents want teenager to leave at 16                                        0.344                  0.308
paralev        Parents want teenager to sit A levels                                       0.224                  0.280
paruniv        Parents want teenager to go to university                                   0.367                  0.345

∗ These variables are measured on a scale from 1 to 10; 1 denotes that the parent left school aged 12 or less, 2 means the parent left school at age 13 or 14, etc.
∗∗ Small cell sizes prevent the use of a finer measure of ethnicity
E Variable excluded from the examination equation. L Variable excluded from the school leaving equation
into one of these school types. This system, however, was criticised because selection was based purely on performance at the age of 11 or 12. As a result, from the mid-1960s onwards, a number of local education authorities had moved towards a system of comprehensive schools taking all children in a given local authority, regardless of their ability. We include dummy variables to reflect all these school types. As a further indicator of the quality of education that 16-year olds received, we also include the pupil–teacher ratio in the school that the cohort member attends.4 To measure the parents' interest in their offspring's educational career, we use a variable on the opinion of the teacher on whether the parent is concerned about the teenager's school performance, and variables which indicate whether the parents want the teenager to complete Advanced levels (A'levels) or to follow a university education. We also include a measure for the general economic situation the teenager faces when leaving school. We use the regional unemployment rate amongst school leavers in summer 1974, which reflects the level of demand for school leavers. The ncds includes the results obtained from the attainment tests in mathematics and reading comprehension that respondents sat at the ages of 7, 11 and 16. These have been used extensively in a number of studies. Such previous achievements may capture variation in unobserved ability or past inputs across children, which is likely to be correlated with current school quality measures. Hanushek et al. (1996) among others use standardised test scores to control for these differences. We include combined test scores at the age of 7 in all three equations, on the grounds that measures of attainment at 7 are likely to be the closest proxy for the underlying ability of teenagers and for parental input at the early stages of the life cycle. They are less 'contaminated' by later parental attention, quality of schooling and other factors which will determine how well a child performs in school tests. Furthermore, the results of test scores at 7 clearly avoid any potential endogeneity problems that could arise with the test results at 16.

3 The econometric model

Our model consists of three equations. The first equation explains variations in hours of work supplied on a part-time basis by 16-year olds who have yet to complete their compulsory full-time schooling. Information on hours worked was gathered at least three or four months before respondents took their O'levels and were able to leave school. The second equation explains our measure of examination success at 16. The third equation explains the school leaving decision, which is made after the exams are taken. In line with the timing of these outcomes, we treat them as sequential, with the decision to work on a part-
4 This variable is derived using information on the total school roll divided by the number of full-time equivalent teachers.
time basis while in full-time education taken before the examinations, and examination results determined before the decision whether or not to continue schooling is made. We take account of this structure in the specification of our estimation equations, but also allow for unobserved factors that jointly drive the three outcomes. Thus we do not model a causal effect of exam results or of the continuation decision on hours worked, since the timing implies that the unexpected component of exam results or the attractiveness of each continuation alternative cannot affect hours worked. On the other hand, the predictable part of exam results, including factors that are unobserved to the econometrician but known by the teenager, is allowed to affect the hours of work decision. This is why the hours equation includes all the regressors which are included in the other two equations and has an error term which can be correlated with the error terms in the other equations. Similarly, exam results are not allowed to causally depend on the continuation decision, but the teenager's prediction of the continuation decision may well have an influence on exam results. Thus the three decisions can essentially be joint in nature, in the sense that the teenager's plans for one decision can affect an earlier decision, but this is modelled in a "reduced form" way: the effects of such plans are not explicitly modelled, but all the regressors and unobservables determining such plans are incorporated. On the other hand, we do model explicitly the causal effects of hours worked on exam success and continuation, and the causal effect of exam results on the continuation decision. Hours worked are reported only as categorical information. There are seven categories, and the bounds of the categories are known (see Table 1). We therefore model this variable as a grouped regression (see Stewart 1983):5

\[
H^{*} = X_{H}\beta_{H} + u_{H}; \qquad H = 3j \ \text{if} \ m_{j-1} < H^{*} \le m_{j}, \tag{1}
\]
\[
m_{-1} = -\infty, \qquad m_{j} = 0.5 + 3j \ (j = 0, \ldots, 5), \qquad m_{6} = \infty.
\]
Here H denotes the hours category, multiplied by 3 to make the scale comparable to that of actual hours worked per week. H^* is a latent variable, and X_H is a vector of explanatory variables. The vector X_H contains all variables in the model. The distribution of the error term u_H is discussed below. The dependent variable in the exam equation is the number of O'level passes obtained at age 16 (see Sect. 2). This number is zero for about 50% of all individuals, and we model it as a censored regression equation:

\[
E^{*} = X_{E}\beta_{E} + \gamma_{E} H + u_{E}; \qquad E = \max(E^{*}, 0). \tag{2}
\]
Here E denotes the number of O'levels, E^* is a latent variable, X_E is a vector of explanatory variables, and u_E is an error term. We explicitly allow exam success to causally depend on hours worked when attending school.
5 For notational convenience, the index indicating the individual is omitted throughout.
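For illustration, the sketch below writes down the log-likelihood of the grouped regression in Eq. (1), using the known category bounds m_j = 0.5 + 3j. It is a schematic implementation only; the data arrays and the parameterisation (a log standard deviation as the last parameter) are assumptions made for the example, not the authors' code.

```python
# Sketch of the grouped ("interval") regression log-likelihood for the hours
# equation (1). The bounds follow m_{-1} = -inf, m_j = 0.5 + 3j, m_6 = +inf.
import numpy as np
from scipy.stats import norm

BOUNDS = np.array([-np.inf, 0.5, 3.5, 6.5, 9.5, 12.5, 15.5, np.inf])

def loglik_hours(params, X, cat):
    """params = (beta_1..beta_k, log sigma); cat in {0,...,6} indexes the band."""
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    lower = (BOUNDS[cat] - xb) / sigma       # standardised lower bound of band
    upper = (BOUNDS[cat + 1] - xb) / sigma   # standardised upper bound of band
    return np.sum(np.log(norm.cdf(upper) - norm.cdf(lower)))
```

The censored exam equation (2) would contribute a standard tobit term instead, and joint estimation additionally requires the error correlation structure discussed below.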
The choice between continuing full-time education (C = 0), going into a training programme (C = 1), and entering the labour force (C = 2) may be viewed as inversely ordered by the amount of education involved. An appropriate specification is therefore an ordered response model:6

\[
C^{*} = X_{C}\beta_{C} + \gamma_{C} H + \delta_{C} E + u_{C}, \qquad
C = 0 \ \text{if} \ C^{*} < 0, \quad C = 1 \ \text{if} \ 0 < C^{*} < m_{C}, \quad C = 2 \ \text{if} \ C^{*} > m_{C}. \tag{3}
\]
Here C^* is a latent variable, X_C is a vector of explanatory variables, and u_C is an error term (with variance normalized to one). The index C^* depends on hours worked when 16, and on exam success, with coefficients γ_C and δ_C. In the standard ordered probit model, the category bound m_C > 0 is estimated as an additional parameter. We extend the standard specification by allowing m_C to depend on all explanatory variables in the equation:

\[
m_{C} = \exp\!\left(X_{C}\beta_{m} + \gamma_{m} H + \delta_{m} E\right). \tag{4}
\]
This leads to a model with the same degree of flexibility as the multinomial logit model, in which the alternatives are not ordered (see Pradhan and Van Soest 1995 for a comparison of the two in a similar framework). The vector of error terms u = (u_H, u_E, u_C) is assumed to be independent of all explanatory variables in X_H, X_E and X_C and multivariate normal with mean zero and covariance matrix Σ. By means of normalisation, Σ(3, 3) = Var(u_C) is set equal to one. If Σ(1, 2) = 0, hours are exogenous in the exam equation. Similarly, if Σ(1, 3) = Σ(2, 3) = 0, hours and exam results are exogenous for the school leaving decision. If Σ is diagonal, the three equations can be estimated separately by maximum likelihood. If Σ is not diagonal, separate estimation results in inconsistent estimates of the exam and school leaving equations due to endogeneity. Therefore, the three equations are estimated jointly by maximum likelihood. Simpler two-stage estimators for the exam equation and the school leaving equation are not available in this case. The likelihood contribution of each individual is either a trivariate normal probability (if E = 0), or a univariate density multiplied by a bivariate normal (conditional) probability if E > 0. See the Appendix for the likelihood contributions. To allow for the general case without restrictions on Σ, we have to make some identifying restrictions on the variables in X_E and X_C. In Table 2, those variables which are excluded from the exam equation are marked with superscript "E"; those variables which are excluded from the school leaving equation are marked with superscript "L". To identify the coefficient of hours worked in the examination equation, we exclude the local unemployment rates and our measure for parental income from X_E. The assumption that unemployment rates have no direct effect on exam results seems quite plausible. The effect
6 This ordering may be questioned if school leavers who cannot find a job are put on a training scheme. We have not explored alternative orderings here. Pradhan and van Soest (1995) compare ordered and non-ordered models and find very similar effects of the regressors.
of parental income on the child's examination success should be reflected by the school type variables (richer parents tend to send their children to better schools), the occupational level of the parent, and the interest the parent expresses in the child's school work. We retain all these variables in the examination equation. Our exclusion of income is based on the assumption that income has no effect on exam success beyond that already captured by these variables.7 To identify the effects of hours worked and exam success in the school leaving equation, we exclude the occupational and educational status of the parents from X_C. We retain, however, variables which reflect the wish of the parents that the child proceeds into higher education (variables paralev, paruniv, and intpar). Our exclusions therefore imply that parents' education and occupational status have no direct effects on the continuation decision, over and above those captured by the parents' expressed interest in the offspring's educational career.8
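The following sketch simulates data from the three-equation system in Eqs. (1)–(4) with correlated errors, mainly to make the structure of the model explicit. All coefficient values, the covariance matrix and the sample size are arbitrary assumptions chosen for illustration; they are not estimates from the paper.

```python
# Illustrative simulation of the three-equation model (1)-(4).
# All parameter values below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 3
X = rng.normal(size=(n, k))                      # common regressors

b_H, b_E, b_C = rng.normal(size=(3, k))          # equation coefficients
gamma_E, gamma_C, delta_C = -0.05, -0.01, 0.06   # effects of hours and exams
b_m, gamma_m, delta_m = 0.1 * rng.normal(size=k), 0.0, 0.05

Sigma = np.array([[81.0, 1.0, 0.2],              # Cov(u_H, u_E, u_C);
                  [ 1.0, 9.0, 0.1],              # Var(u_C) normalised to one
                  [ 0.2, 0.1, 1.0]])
u = rng.multivariate_normal(np.zeros(3), Sigma, size=n)

# Hours: grouped regression with bounds m_{-1} = -inf, m_j = 0.5 + 3j, m_6 = inf
H_star = X @ b_H + u[:, 0]
bounds = np.array([-np.inf] + [0.5 + 3 * j for j in range(6)] + [np.inf])
H = 3 * (np.searchsorted(bounds, H_star, side="left") - 1)

# Exam results: censored at zero
E = np.maximum(X @ b_E + gamma_E * H + u[:, 1], 0.0)

# Continuation choice: ordered, with flexible threshold m_C
C_star = X @ b_C + gamma_C * H + delta_C * E + u[:, 2]
m_C = np.exp(X @ b_m + gamma_m * H + delta_m * E)
C = np.where(C_star < 0, 0, np.where(C_star < m_C, 1, 2))
```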
4 Results

We have estimated and compared a variety of different specifications. Based on likelihood ratio tests, we come to the following conclusions. First, pooled estimation of males and females with different intercepts for the two groups is rejected in favour of separate models. Second, the ordered probit specification of the school leaving equation is rejected in favour of the specification which allows for flexible thresholds. Third, specifications which do not allow for correlation in the error terms cannot be rejected against the general specification. This suggests that the rich set of conditioning variables, including our measures for ability, eliminates correlation in unobservables across the three equations. Finally, models in which hours worked enter linearly cannot be rejected against models where hours worked enter nonlinearly in the exam and school leaving equations, using dummies for the hours categories. We report results for two specifications. Model I imposes that Σ is a diagonal matrix, thus restricting the correlations between the error terms to be equal to zero. This corresponds to separate estimation of the three equations. Model II allows for any correlation between the error terms.
7 Even conditional on school type and parental interest, parental income might affect exam success if richer parents provide more educational resources. To check this, we ran some regressions of the exam equation identifying the effect of hours worked only by the local unemployment rates, and including parental income in the equation. Parental income is not significant, with p-values of 0.27 and 0.17 for females and males respectively. Moreover, parental income is not a strong predictor for hours worked (see Table 6), so identification works mainly through local unemployment rates.
8 Excluding parental education from the staying on equation can be criticized. To test this, we esti-
mated equations for the continuation decisions with parental education included in the equation (identifying exam success only through excluding parental occupational and labour market status). The p-values for joint significance are 0.11 for males and 0.87 for females.
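The specification comparisons described above are standard likelihood-ratio tests; a minimal sketch is given below. The function and the example degrees of freedom (three correlation parameters when testing Model I against Model II) are illustrative; the log-likelihood values would come from the estimated models.

```python
# Minimal likelihood-ratio test sketch for nested specifications,
# e.g. Model I (all error correlations zero) against Model II.
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_general, df):
    """Return the LR statistic and its asymptotic chi-squared p-value."""
    stat = 2.0 * (loglik_general - loglik_restricted)
    return stat, chi2.sf(stat, df)

# Hypothetical usage: three free correlations rho(1,2), rho(1,3), rho(2,3)
# stat, pval = lr_test(ll_model1, ll_model2, df=3)
```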
Table 3 Exam equation, males and females (coefficients with t-ratios in parentheses)

Variable      Males, Model I     Males, Model II    Females, Model I    Females, Model II
Constant      −7.263 (−6.07)     −7.326 (−5.74)     −10.881 (−10.25)    −10.877 (−9.95)
oldsib/10     −0.878 (−5.33)     −0.920 (−5.52)     −0.275 (−1.94)      −0.261 (−1.82)
yngsib/10     −0.208 (−2.61)     −0.215 (−2.53)     −0.159 (−2.13)      −0.161 (−2.03)
pawork        0.152 (0.32)       0.103 (0.22)       0.022 (0.06)        0.076 (0.22)
paprof        1.686 (2.71)       1.713 (2.72)       1.238 (2.69)        1.096 (2.23)
paskil        0.347 (0.76)       0.303 (0.66)       0.962 (2.94)        0.976 (2.87)
pass          0.317 (0.68)       0.281 (0.60)       0.724 (2.15)        0.719 (2.08)
mawork        −0.265 (−1.25)     −0.311 (−1.42)     −0.019 (−0.10)      −0.027 (−0.14)
maprof        0.935 (0.53)       1.005 (0.56)       −0.389 (−0.05)      −0.272 (−0.02)
kidnoteu      −0.933 (−0.74)     −0.854 (−0.66)     −0.969 (−0.83)      −1.175 (−1.00)
comp          0.544 (2.30)       0.593 (2.37)       0.816 (3.78)        0.828 (3.83)
grammar       2.807 (7.90)       2.855 (7.69)       2.492 (7.97)        2.478 (7.81)
indep         2.272 (4.25)       2.451 (4.11)       2.315 (5.12)        2.297 (4.94)
special       1.138 (1.50)       1.194 (1.50)       1.433 (1.31)        1.483 (1.33)
singsex       −0.293 (−1.17)     −0.280 (−1.11)     0.534 (2.58)        0.524 (2.54)
ctratio/10    −0.403 (−0.78)     −0.458 (−0.86)     −0.197 (−0.45)      −0.150 (−0.34)
intpar        0.768 (3.35)       0.767 (3.33)       1.310 (6.24)        1.354 (6.02)
paruniv       2.945 (12.21)      3.043 (10.90)      2.903 (12.46)       2.930 (12.47)
paralev       1.207 (4.73)       1.257 (4.83)       1.127 (5.13)        1.129 (5.12)
paageft/10    1.939 (2.85)       1.885 (2.76)       1.713 (2.93)        1.776 (2.94)
maageft/10    1.738 (2.13)       1.777 (2.17)       1.544 (2.21)        1.444 (2.05)
able7/10      0.684 (12.89)      0.670 (11.94)      0.943 (17.31)       0.945 (16.56)
hours         −0.049 (−2.88)     0.013 (0.20)       −0.022 (−1.20)      −0.061 (−0.71)
sigma(ex)     3.150 (33.40)      3.171 (30.88)      2.884 (38.14)       2.895 (35.85)
Rho(1,2)                         −0.204 (−0.94)                         0.118 (0.47)
4.1 The interdependence between hours worked, exam success, and the staying on decision

We first discuss the parameter estimates for the variable hours worked in the exam equation, and for hours worked and exam success in the school leaving equation. Table 3 presents the estimates for the exam equation. Table 4 summarizes the marginal effects of hours worked and exam success on the school leaving decision (see the appendix for details on how these are computed). Consider first the exam equation (Table 3). Comparing models I and II leads to the following conclusions. For males, the effect of hours worked on exam success is negative and significant in specifications which do not allow for correlation between the errors (model I). The estimates indicate that a ten-hour increase in part-time work reduces the number of O'levels by 0.49 for males and 0.22 for females;9 the effect for females, however, is not significantly different from
9 This is for someone for whom the probability of zero O'levels can be neglected.
Table 4 Marginal effects of hours and exam results on staying on decisions, various specifications (coefficients with t-ratios in parentheses)

                      Decision
Variable              Stay in school       Training             Labour market
Males
 Model I   hours      −0.004 (1.73)        0.003 (1.79)         0.0005 (0.28)
           exam       0.063 (9.90)         −0.006 (0.77)        −0.0574 (7.27)
 Model II  hours      −0.003 (0.68)        0.003 (1.56)         −0.000 (0.05)
           exam       0.066 (5.08)         −0.006 (0.68)        −0.059 (3.77)
           Rho(1,3)   0.019 (0.15)
           Rho(2,3)   0.016 (0.12)
Females
 Model I   hours      −0.003 (1.46)        0.007 (3.00)         −0.003 (1.15)
           exam       0.063 (9.33)         0.003 (0.44)         −0.066 (8.09)
 Model II  hours      −0.009 (1.47)        0.004 (1.35)         0.005 (0.63)
           exam       0.046 (3.18)         −0.001 (0.11)        −0.045 (2.29)
           Rho(1,3)   −0.106 (−0.64)
           Rho(2,3)   −0.258 (−1.45)
zero. If we allow for correlation in the error terms (model II), the effects turn insignificant for both males and females. The estimated correlation coefficients ρ(1, 2) are not significantly different from zero either. The null of Model I is therefore not rejected against the more general alternative Model II. For the school leaving equation, we only discuss the marginal effects on the probabilities of each of the three states, presented in Table 4. When restricting the correlation between the error terms to zero (model I), we find that the number of hours worked affects the decision to stay on at school negatively for both males and females, but only for males is the effect significant at the 10% level. Hours worked have a positive effect on entering a training scheme for both males and females. If we allow for nonzero correlation coefficients (model II), the hours variables retain their signs, but turn insignificant. The estimates of the correlation coefficients ρ(1, 3) are insignificant as well. Again, model I is therefore not rejected against the more general alternative model II. In conclusion, we find a negative effect of hours worked on exam success and the decision not to continue in full-time education for males; however, the effects on both outcomes are moderate and, in the case of the school leaving decision, at the margin of statistical significance. The effect of exam results on the staying on decision is clear-cut, and endogenization changes the estimates only slightly. According to model II, an increase by one in the number of O’levels passed decreases the probability of leaving school and joining the labour market by 5.9 and 4.5 percentage points for males and females, respectively. It increases the probability of staying on at school
by 6.6 percentage points for males and 4.6 percentage points for females. The effect on joining a training scheme is insignificant for both. We conclude from these results that working part time while attending school is unlikely to have a notable effect on exam success. The effect of part-time work on the school leaving decision is also moderate, and it is insignificant in the more general model. Labour force participation while attending school thus plays a minor role for both these outcomes. In contrast, exam success does affect the school leaving decision strongly, reducing the probability that the individual joins the labour market, and increasing the probability that the individual stays on at school.
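These marginal effects can be obtained from the estimated probabilities of the flexible-threshold ordered model in Eqs. (3) and (4), for instance by a finite difference in the exam variable. The sketch below illustrates the calculation; the parameter vectors passed to it are hypothetical placeholders and do not reproduce the estimates in Table 4.

```python
# Sketch: marginal effect of one additional O'level pass on the three outcome
# probabilities of the ordered model with flexible threshold, Eqs. (3)-(4).
# Parameter values supplied in `params` are assumed placeholders.
import numpy as np
from scipy.stats import norm

def outcome_probs(x_c, H, E, b_C, g_C, d_C, b_m, g_m, d_m):
    idx = x_c @ b_C + g_C * H + d_C * E
    m_c = np.exp(x_c @ b_m + g_m * H + d_m * E)
    p_stay = norm.cdf(0.0 - idx)                 # C = 0: stay at school
    p_train = norm.cdf(m_c - idx) - p_stay       # C = 1: training
    p_labour = 1.0 - norm.cdf(m_c - idx)         # C = 2: labour market
    return np.array([p_stay, p_train, p_labour])

def marginal_effect_exam(x_c, H, E, params, step=1.0):
    """Finite-difference change in the three probabilities for E -> E + step."""
    return outcome_probs(x_c, H, E + step, *params) - outcome_probs(x_c, H, E, *params)
```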
4.2 Parental background

4.2.1 Exam success

Looking specifically at each equation in turn, we now examine the impact of the other variables. We first discuss the coefficients of the examination success equation, presented in Table 3. The coefficients on the school type variables give rise to results which have potentially important policy implications, given the highly controversial debate in the UK surrounding the merits of selective versus non-selective schools. We find that the type of school that the teenager attends has a significant impact on academic performance, even when differences in family background and ability have been controlled for. The base category includes teenagers attending secondary modern or technical schools (lower ability state run schools). Teenagers attending independent (selective non-state run) or grammar schools (higher ability state run schools) (variables grammar, indep) perform significantly better than their counterparts in non-selective state run schools. Grammar and independent schools are more likely to have 6th forms, giving their pupils better opportunities at A level and encouraging higher achievement. Furthermore, attendance of a single sex school seems to matter only for females: it has a significantly positive influence on their exam performance, while the effect on male performance is negative, but insignificant. These findings are consistent with the idea that whilst teenage girls tend to perform more strongly in a single sex environment, teenage boys do not.10 The dummy variables reflecting parental interest in the teenager's education and future prospects (intpar, paruniv and paralev) are all strongly significant, with the expected signs. The estimates indicate that these parental attitudes are strongly associated with the child's performance. According to the estimates in columns 1 and 3, the fact that the parents want the teenager to take A'levels is associated with an increase in the number of O'levels by about one. Both sons
10 See Dearden et al. 2002 and Dustmann et al. 2003 for more analysis of the effect of school type on school success.
and daughters of parents who want the 16 year old to attend university have about 3 more O'levels.11 The effect of the father's and mother's educational background (paageft, maageft, which measure the age at which the parents left full-time education) on the child's success is likewise quite strong and significant for both samples, with similar magnitudes for mothers and fathers. Since we condition on indicators which express the parents' interest in the child's academic performance as well as on the child's ability, these variables may reflect to some extent the quality of parental input. The ability measure (able7) has the expected positive sign and is strongly significant. Based on columns 1 and 3, an increase in test scores by 10 (on a scale between 1 and 100) raises the number of O'levels by 0.67 for males and 0.96 for females. For both males and females the number of older and younger siblings affects exam success negatively, with older siblings being more important. This result is in line with Becker's (1991) hypothesis about a trade-off between the quantity and quality of children, and suggests that parental attention is reduced as family size increases. Furthermore, our results suggest birth order effects, particularly for males. Here parental attention seems to be unevenly distributed, with most being given to older children. Similar results are reported by Hanushek (1992), who shows that birth order plays an important role for children's academic performance. Large negative birth order effects on children's education are also reported in a recent study by Black, Devereux and Salvanes (2005).

4.2.2 School leaving

Estimation results of the school leaving equation are presented in Table A2 in the appendix, and marginal effects for model II on the probabilities of the three outcomes for the average male and female in Table 5.12 Here both the direct effect on C^* and the indirect effect through the threshold m_C are taken into consideration (see Eq. (4) and the appendix for details). The first column presents the effect on the probability of remaining in school, the second and third on the probabilities of choosing some training programme and entering the labour market. Conditional on exam success, some school type variables retain an effect on the school leaving decision. We find that teenagers attending grammar or independent schools are more likely to remain in school beyond the age of 16, even when performance in O'levels is controlled for. Here, the school type dummies may be capturing a number of effects such as the quality of career guidance that may be available in schools of varying types. For example, peer pressure in grammar or independent schools may discourage teenagers from leaving school at the first possible opportunity. Furthermore, specialist staff employed
11 See also Feinstein and Symons (1999) for analysis of the effect of parental interest on school achievement.
12 Estimated coefficients for model I are very similar, except for the variables hours and exam, which are discussed above.
Table 5 Marginal effects, Model II (coefficients with t-ratios in parentheses)

Males
Variable      Stay in school       Training             Labour market
oldsib/10     −0.019 (0.82)        −0.009 (0.49)        0.029 (1.53)
yngsib/10     −0.007 (0.64)        −0.003 (0.31)        0.010 (1.19)
mawork        −0.022 (0.70)        0.070 (2.24)         −0.047 (1.70)
pawork        0.018 (0.32)         −0.010 (0.22)        −0.007 (0.17)
kidnoteu      0.071 (0.56)         0.009 (0.09)         −0.081 (0.83)
comp          0.048 (1.46)         −0.063 (2.42)        0.015 (0.56)
grammar       0.086 (1.77)         −0.272 (3.81)        0.186 (2.49)
indep         0.206 (2.78)         −0.180 (1.34)        −0.026 (0.18)
special       0.092 (0.84)         −0.464 (2.34)        0.371 (2.48)
singsex       0.057 (1.86)         0.008 (0.26)         −0.066 (2.24)
loginc        0.009 (0.22)         0.019 (0.48)         −0.028 (0.77)
unrate        −0.340 (1.10)        0.381 (1.40)         −0.040 (0.15)
ctratio/10    −0.138 (2.53)        −0.001 (0.05)        0.140 (2.34)
intpar        0.050 (1.59)         0.014 (0.54)         −0.065 (2.54)
paruniv       0.343 (9.86)         −0.151 (4.16)        −0.192 (5.14)
paralev       0.225 (6.29)         −0.085 (2.41)        −0.140 (4.26)
able7/10      0.026 (2.77)         −0.000 (0.14)        −0.025 (3.02)
hours         −0.003 (0.68)        0.003 (1.56)         −0.000 (0.05)
exam          0.066 (5.08)         −0.006 (0.68)        −0.059 (3.77)

Females
Variable      Stay in school       Training             Labour market
oldsib/10     −0.033 (1.69)        −0.028 (1.39)        0.061 (2.74)
yngsib/10     0.012 (0.95)         −0.027 (2.42)        0.014 (1.21)
mawork        0.027 (0.95)         0.000 (0.01)         −0.027 (0.82)
pawork        0.024 (0.50)         0.027 (0.56)         −0.052 (0.96)
kidnoteu      −0.002 (0.01)        0.223 (2.02)         −0.221 (1.26)
comp          0.062 (1.88)         −0.022 (0.80)        −0.040 (1.18)
grammar       0.127 (2.73)         −0.071 (1.24)        −0.055 (0.82)
indep         0.139 (1.78)         0.129 (1.37)         −0.268 (2.47)
special       −0.078 (0.47)        0.122 (1.12)         −0.044 (0.35)
singsex       0.010 (0.34)         −0.030 (0.90)        0.019 (0.53)
loginc        0.041 (1.05)         −0.011 (0.31)        −0.029 (0.67)
unrate        −0.588 (1.93)        −0.041 (0.15)        0.629 (2.04)
ctratio/10    −0.129 (1.85)        0.104 (1.36)         0.025 (0.30)
intpar        0.055 (1.73)         0.044 (1.50)         −0.099 (2.77)
paruniv       0.431 (11.57)        0.026 (0.66)         −0.457 (9.96)
paralev       0.197 (5.57)         0.071 (2.34)         −0.268 (8.12)
able7/10      0.010 (0.94)         0.016 (2.12)         −0.026 (2.37)
hours         −0.009 (1.47)        0.004 (1.35)         0.005 (0.63)
exam          0.046 (3.18)         −0.001 (0.11)        −0.045 (2.29)
to give informed advice about education and career choices may have an effect on school-leaving decisions. The variables reflecting the interest of the parent in the teenager’s school work and the desire of the parent that the child continues education are strongly significant, with the expected sign. Parental aspirations that the child attends university or achieves A levels increase the probability of remaining at school
for males by 35 and 25 percentage points respectively. For females, the wish of the parent that the child aims for a university education increases the probability of remaining at school by 41 percentage points. These large effects suggest that even at age 16, parents can have a strong influence on the child's educational career.13 The pupil–teacher ratio is negatively and significantly associated with the probability of staying on at school for males, conditional on the school type variables, but not for females. Dustmann et al. (2003) discuss class size effects on staying on decisions, and subsequent labour market outcomes, in detail. While the effect of the number of O'level passes obtained seems to be the same for males and females (see discussion above), the effect of the ability variable is not. For males it increases the probability of remaining in full-time education, and decreases the probability of joining the labour force full time. The effect on training scheme participation is not significant. For females, the ability variable positively influences the decision to participate in training, but negatively influences the decision to join the labour force. Its effect on the decision to remain in full-time education is insignificant. This may reflect the fact that traditionally teenage girls have been pushed towards certain careers requiring vocational or other types of training (e.g. nursing or administrative jobs), irrespective, to a certain extent, of their ability levels or their academic performance. Notice that these results, though holding for the ncds cohort, may no longer hold for females entering the labour market today.

4.2.3 Hours worked

We now turn to the hours worked equation. Results for model II are reported in Table 6. Since the model is a grouped regression model, we can interpret the coefficients as marginal effects on hours worked (ignoring the censoring at zero hours). For both males and females, the number of younger siblings has a strong positive effect on the number of hours the teenager works, while the number of older siblings is insignificant. An obvious explanation is that individuals have to compete with younger siblings for the financial resources parents are able to allocate between them, while older siblings are financially more independent. Most indicators for parents' occupational status and skill level are insignificant, with one exception: the variable which indicates that the father owns or works on a farm, which affects the labour supply of males positively. The mother's participation in the labour market is positively associated with hours worked for both males and females, and the effect is significant at the 5% level for female teenagers and at the 10% level for males. One reason may be that women often work in positions where there are part-time work opportunities for their offspring. It may also be that children who see their mother work may be more likely to engage in part-time work themselves, or the mother working
13 See Dustmann (2004) for a discussion of the importance of the child's age when important school track choices have to be made.
Table 6 Hours worked equation (specification: model II)

Variable       Males                Females
               Coeff.    t-ratio    Coeff.    t-ratio
Constant       0.359     0.09       −1.337    −0.42
oldsib/10      0.006     0.01       0.235     0.78
yngsib/10      0.725     3.46       0.552     3.34
loginc         0.647     0.76       −0.245    −0.42
pawork         0.791     0.72       1.772     2.04
paprof         −0.970    −0.59      −4.207    −3.62
paskil         −0.393    −0.38      −1.706    −2.24
pass           −1.162    −1.08      −1.623    −2.07
pafarm         8.287     5.41       −0.535    −0.47
mawork         1.177     1.86       0.962     1.98
maprof         −3.881    −0.68      −4.804    −0.31
maserv         −0.364    −0.44      0.987     1.45
paserv         −2.127    −0.66      0.204     0.06
kidnoteu       −3.441    −1.51      −6.777    −2.36
comp           −1.762    −2.79      −0.470    −0.93
grammar        −1.810    −1.74      −1.270    −1.64
indep          −7.409    −4.47      −2.817    −2.32
special        −5.602    −2.64      −4.299    −2.28
singsex        −1.047    −1.49      −0.119    −0.23
ctratio/10     −1.479    −1.12      0.572     0.46
intpar         0.577     1.02       1.426     2.94
paruniv        −3.627    −5.37      −0.263    −0.47
paralev        −0.889    −1.29      0.376     0.72
paageft/10     −0.553    −0.28      −1.538    −0.98
maageft/10     −1.551    −0.68      −1.826    −1.01
unrate         −19.245   −3.29      −24.355   −5.36
able7/10       0.465     3.38       0.353     3.03
sigma          9.048     32.65      7.211     31.15
Alternatively, the mother working could be capturing unmeasured disadvantage in the household that requires the children to work. The school types have the expected sign. Teenagers attending independent or grammar schools are likely to work fewer hours than those in the base category (secondary modern or technical schools). This may be because 16-year-olds who go to independent or grammar schools have less free time to work part-time; they might be given more homework, be more involved in extracurricular activities, or have to travel further to attend school. Surprisingly, male teenagers in comprehensive schools also seem to work fewer hours than those in secondary modern or technical schools. Also, sons of parents who wish that their child attends university work fewer hours; this variable is not significant for females. Finally, ability has a significant and positive effect on hours worked for both sexes, perhaps because higher-ability teenagers need to spend less time studying and can better afford to work (controlling for differences in school type). At the other end of the spectrum, one might expect
that those with low ability also want to work because of a low return to studying, but we did not find any evidence of a non-monotonic effect of ability.

5 Conclusion

In this paper, we investigate the decision to work part time while still in full-time education, subsequent exam success, and the career choices of 16-year-old school children in a model which takes account of the possible interdependencies of these events. In particular, we allow the number of hours worked to affect both examination results and the school leaving decision, and we allow examination performance to influence school leaving. These three outcomes are sequential, with hours worked during school time observed before final examinations are taken, and exam success determined before the school continuation decision is taken. We model these three events jointly, taking account of their sequential nature. We also further differentiate the school leaving decision, distinguishing between the 16-year-olds who leave school to enter the labour force and those who leave to go on to further training. This distinction seems important, as, despite leaving full-time education, a large fraction of school leavers enrols in various training schemes rather than entering the labour market immediately. We model this decision in a flexible way, considering the three choices as ordered, but allowing the threshold parameters to depend on observed characteristics.

Our analysis is based on data from the third and fourth waves of the National Child Development Study (NCDS). This cohort survey is unique in the detail it provides on school outcomes, parental and family background, and teenagers' other activities. In addition, the longitudinal nature of the survey allows measurement of events over time, which is important for linking the three events we investigate. Initial specification tests suggest separate estimation for males and females, and support the specification with flexible thresholds. The specification that imposes diagonality on the error structure of the three equations cannot be rejected against the most general specification, suggesting that the rich set of conditioning variables absorbs the correlation in unobservables across the three equations.

Regarding the relationship between labour supply when in full-time education and school performance, we conclude that working part-time has only small adverse effects on exam performance for males, but not for females. The effect of hours worked on the decision to remain in full-time education is negative, but likewise small, and marginally significant for males. We conclude from these results that working while in full-time education does not have adverse impacts on school performance for females, nor does it particularly encourage early school-leaving; for males, there is some evidence of small adverse effects. These results are potentially important, as they suggest that part-time work during school education does not lead to any large disadvantage in school achievement. However, one should remember that our findings relate to the 1974 cohort, and may not necessarily carry over to children leaving school today.
On the other hand, we find that strong examination performance at O'level considerably influences the school leaving decision for both males and females. These results remain virtually unchanged whether we estimate the three outcome equations separately or estimate a fully structural model.

Other findings relate to the rich set of family and parental background characteristics on which we condition. We find that teenagers in larger classes tend to drop out of school earlier than those in smaller classes. This effect prevails even when controlling for school types. Children in independent and grammar schools tend to outperform their counterparts in non-selective schools, even when differences in family background and individual characteristics are taken into account. Parental aspirations about the child's future academic career are important for exam success as well as for the continuation decision, both in terms of significance and magnitude, and conditional on other parental characteristics. We also find that exam performance is negatively related to the number of siblings, where differences in the effect between older and younger siblings clearly suggest birth order effects, supporting results in the recent literature. Birth order does not, however, affect the school continuation decision, conditional on examination outcomes.

Appendix: likelihood contributions and marginal effects

We only present the likelihood contributions of individuals with C = 1 (training scheme). Likelihood contributions of those with C = 0 or C = 2 are derived in a similar manner. We have to distinguish two cases:

(1) H = 3j; E = 0; C = 1. The likelihood contribution is given by

L = P\{m_{j-1} < H^* < m_j,\ E^* < 0,\ 0 < C^* < m_C\}
  = P\{m_{j-1} - X_H\beta_H < u_H < m_j - X_H\beta_H,\ u_E < -X_E\beta_E - \gamma_E H,\ -X_C\beta_C - \gamma_C H < u_C < m_C - X_C\beta_C - \gamma_C H\}.   (5)
This can be written as a linear combination of four trivariate normal probabilities. For m_C, the expression on the right-hand side of (4) can be substituted.

(2) H = 3j; E = E^* > 0; C = 1. Denote the residual in the exam equation by e_E = E - X_E\beta_E - \gamma_E H. Then the likelihood contribution is given by

L = f_{E^*}(E)\, P\{m_{j-1} < H^* < m_j,\ 0 < C^* < m_C \mid E\}
  = f_{u_E}(e_E)\, P\{m_{j-1} - X_H\beta_H < u_H < m_j - X_H\beta_H,\ -X_C\beta_C - \gamma_C H - \delta_C E < u_C < m_C - X_C\beta_C - \gamma_C H - \delta_C E \mid u_E = e_E\}.   (6)
Table A1 Attrition

               All obs. (NCDS3, NCDS4, PES)        Estimation sample
Variable       No. Obs.   Mean     SD              No. Obs.   Mean     SD
oldsib         8223       1.16     1.41            3380       1.04     1.28
yngsib         8213       1.21     1.27            3373       1.20     1.23
paageft        8106       4.03     1.82            3427       4.01     1.73
maageft        8217       3.97     1.43            3427       4.01     1.41
able7          10109      65.20    21.16           3427       67.11    20.42
loginc         6538       3.80     0.42            3427       3.83     0.39
pwork          8340       0.87     0.32            3427       0.90     0.29
mawork         8222       0.66     0.47            3427       0.69     0.46
stayon         8832       0.31     0.45            3427       0.32     0.46
Table A2 Continuation equation, model II

Males
                 Parameters            Threshold mC
Variable         Coeff.    t-ratio     Coeff.    t-ratio
Constant         2.071     3.33        0.526     1.46
oldsib/10        0.069     0.88        −0.013    −0.28
yngsib/10        0.026     0.63        −0.003    −0.15
mawork           0.082     0.73        0.165     2.23
pawork           −0.058    −0.29       −0.032    −0.27
kidnoteu         −0.276    −0.67       −0.024    −0.10
comp             −0.165    −1.44       −0.154    −2.34
grammar          −0.293    −1.70       −0.625    −3.87
indep            −0.700    −2.78       −0.471    −1.60
special          −0.317    −0.82       −1.037    −2.31
singsex          −0.202    −1.85       −0.005    −0.07
loginc           −0.034    −0.24       0.041     0.43
unrate           1.180     1.09        0.955     1.47
ctratio/10       0.488     2.39        0.047     0.68
intpar           −0.178    −1.58       0.013     0.19
paruniv          −1.192    −8.67       −0.457    −5.22
paralev          −0.780    −5.91       −0.275    −3.45
able7/10         −0.095    −2.79       −0.013    −0.77
hours            0.011     0.76        0.009     1.74
exam             −0.224    −5.99       −0.036    −1.82
Rho(1,3)         0.019     0.15
Rho(2,3)         0.016     0.12

Females
                 Parameters            Threshold mC
Variable         Coeff.    t-ratio     Coeff.    t-ratio
Constant         2.168     3.05        −0.654    −0.91
oldsib/10        0.113     1.77        −0.052    −0.74
yngsib/10        −0.042    −0.98       −0.094    −2.19
mawork           −0.085    −0.90       −0.016    −0.17
pawork           −0.082    −0.50       0.049     0.28
kidnoteu         −0.032    −0.05       0.650     1.54
comp             −0.207    −1.88       −0.128    −1.26
grammar          −0.419    −2.64       −0.353    −1.82
indep            −0.455    −1.72       0.250     0.80
special          0.279     0.49        0.460     0.99
singsex          −0.040    −0.38       −0.107    −0.92
loginc           −0.142    −1.10       −0.072    −0.55
unrate           1.949     1.97        0.474     0.50
ctratio/10       0.433     1.75        0.436     1.68
intpar           −0.184    −1.66       0.085     0.79
paruniv          −1.453    −11.16      −0.365    −2.75
paralev          −0.665    −5.39       0.006     0.05
able7/10         −0.037    −1.05       0.038     1.32
hours            0.033     1.54        0.023     2.60
exam             −0.154    −3.35       −0.050    −2.05
Rho(1,3)         −0.106    −0.64
Rho(2,3)         −0.258    −1.45
Here f_{E^*} and f_{u_E} are the univariate normal densities of E^* (conditional on the exogenous variables) and u_E. The conditional probability in (6) is a bivariate normal probability. We use the BFGS algorithm in GAUSS to maximize the likelihood, and compute the standard errors from the outer products of the scores.
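For readers who want to see this kind of calculation made concrete, the following sketch (in Python rather than the authors' GAUSS code, and with an illustrative correlation matrix and hypothetical bounds rather than estimated quantities) shows how the rectangle probability in (5) is evaluated as a linear combination of four trivariate normal CDF values.

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical ingredients: a correlation matrix for (u_H, u_E, u_C) and
# illustrative bounds implied by (5); under the diagonal specification R
# would be the identity matrix.
R = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
mvn = multivariate_normal(mean=np.zeros(3), cov=R)

aH, bH = -0.5, 1.0   # m_{j-1} - X_H b_H and m_j - X_H b_H (illustrative)
cE = 0.3             # -X_E b_E - g_E H (upper bound for u_E only)
aC, bC = -0.8, 0.7   # -X_C b_C - g_C H and m_C - X_C b_C - g_C H (illustrative)

def F(h, e, c):
    # Trivariate normal CDF P{u_H <= h, u_E <= e, u_C <= c}
    return mvn.cdf(np.array([h, e, c]))

# The rectangle probability in (5) as a linear combination of four
# trivariate normal probabilities (inclusion-exclusion over u_H and u_C).
L = F(bH, cE, bC) - F(aH, cE, bC) - F(bH, cE, aC) + F(aH, cE, aC)
print(round(L, 4))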
Marginal effects in school leaving equation

The computation of the marginal effects presented in Tables 4 and 5 is based on (3) and (4). For notational convenience, we write Z_C = (X_C, H, E), \theta_C = (\beta_C', \gamma_C, \delta_C)', and \theta_m = (\beta_m', \gamma_m, \delta_m)'. We then have

\partial P[C = 0 \mid Z_C] / \partial Z_C = -f_{u_C}(-Z_C\theta_C)\,\theta_C,   (7)
\partial P[C = 1 \mid Z_C] / \partial Z_C = f_{u_C}(-Z_C\theta_C)\,\theta_C + f_{u_C}(m_C - Z_C\theta_C)\,(\partial m_C/\partial Z_C - \theta_C),   (8)
\partial P[C = 2 \mid Z_C] / \partial Z_C = f_{u_C}(m_C - Z_C\theta_C)\,(\theta_C - \partial m_C/\partial Z_C).   (9)
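As an illustration of (7)–(9), the sketch below evaluates the three marginal-effect vectors at an assumed sample average of Z_C. It is not the authors' code: the parameter values are hypothetical, u_C is taken to be standard normal, and the threshold is assumed to be linear, m_C = Z_C θ_m, so that ∂m_C/∂Z_C = θ_m. It also previews the simulation approach to standard errors described in the next paragraph.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical estimates for a 4-element Z_C = (constant, X_C, H, E).
theta_C = np.array([0.5, -0.2, 0.1, -0.3])    # continuation-equation coefficients
theta_m = np.array([1.2, 0.05, -0.1, 0.02])   # threshold coefficients (m_C assumed linear)
V = 0.01 * np.eye(8)                          # assumed asymptotic covariance of (theta_C, theta_m)
z_bar = np.array([1.0, 0.4, 10.0, 2.0])       # hypothetical sample average of Z_C

def marginal_effects(tc, tm, z):
    # Equations (7)-(9) with u_C standard normal and m_C = z'tm.
    v, m = z @ tc, z @ tm
    dP0 = -norm.pdf(-v) * tc                               # (7)
    dP1 = norm.pdf(-v) * tc + norm.pdf(m - v) * (tm - tc)  # (8)
    dP2 = norm.pdf(m - v) * (tc - tm)                      # (9)
    return np.column_stack([dP0, dP1, dP2])                # rows: elements of Z_C; cols: C = 0, 1, 2

me_hat = marginal_effects(theta_C, theta_m, z_bar)

# Simulation-based standard errors: redraw the parameter vector from its
# (assumed) asymptotic distribution and take standard deviations.
draws = rng.multivariate_normal(np.concatenate([theta_C, theta_m]), V, size=500)
se = np.array([marginal_effects(d[:4], d[4:], z_bar) for d in draws]).std(axis=0)
print(np.round(me_hat, 3))
print(np.round(se, 3))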
The effects in Tables 4 and 5 are evaluated at sample averages. Since the marginal effects are functions of the parameters, the standard errors of their estimates can be computed from the standard errors of the parameter estimates (taking the distribution of Z_C as given). This can in principle be done by the delta method. A computationally easier alternative is to use simulations. The standard errors in Tables 4 and 5 are computed as the standard deviations in samples of 500 marginal effects, obtained from 500 draws of the vector of parameters from the estimated asymptotic distribution of the vector of parameter estimates.

References

Becker GS (1981) A treatise on the family. Harvard University Press, Cambridge
Becker GS, Lewis HG (1973) On the interaction between the quantity and quality of children. J Polit Econ 81 Suppl:S279–S288
Behrman JR, Taubman P (1986) Birth order, schooling and earnings. J Labor Econ 4:S121–S145
Black S, Devereux P, Salvanes KG (2005) The more the merrier? The effect of family size and birth order on children's education. Quart J Econ 120:669–700
Booth AL, Satchell SE (1994) Apprenticeships and job tenure. Oxford Econ Papers 46:676–695
Card D, Krueger A (1992) Does school quality matter? Returns to education and the characteristics of public schools in the United States. J Polit Econ 100:1–40
Coleman JS (1966) Equality of educational opportunity. Washington
Currie J, Thomas D (1999) Early test scores, socio-economic status and future outcomes. Res Labor Econ 20:103–132
Davie R (1971) Size of class, educational attainment and adjustment. Concern, No. 7, 8–14
Dearden L, Ferri J, Meghir C (2002) The effect of school quality on educational attainment and wages. Rev Econ Statist 84:1–20
Dolton P, Vignoles A (2000) The impact of school quality on labour market success in the United Kingdom. Mimeo, University of Newcastle
Dustmann C (2004) Parental background, secondary school track choice, and wages. Oxford Econ Papers 56:209–230
Dustmann C, Rajah N, van Soest A (2003) Class size, education, and wages. Econ J 113:F99–F120
Eckstein Z, Wolpin K (1999) Why youth drop out of high school: the impact of preferences, opportunities and abilities. Econometrica 67:1295–1339
Ehrenberg RG, Sherman DR (1987) Employment while in college, academic achievement and post college outcomes. J Human Resour 22:1–23
Feinstein L, Symons J (1999) Attainment in secondary school. Oxford Econ Papers 51:300–321
Griliches Z (1980) Schooling interruption, work while in school and the returns from schooling. Scand J Econ 82:291–303
Harmon C, Walker I (2000) The returns to the quantity and quality of education: evidence for men in England and Wales. Economica 67:19–36
Hanushek EA (1992) The trade-off between child quantity and quality. J Polit Econ 100:84–117
Hanushek EA, Rivkin SG, Taylor LL (1996) Aggregation and the estimated effects of school resources. Rev Econ Statist 78:611–627
Hotz VJ, Xu LC, Tienda M, Ahituv A (2002) Are there returns to the wages of young men from working while in school? Rev Econ Statist 84:221–236
Light A (2001) In-school work experience and the return to schooling. J Labor Econ 19:65–93
MacLennan E, Fitz J, Sullivan S (1985) Working children. Low Pay Unit, London
Micklewright J (1986) A note on household income data in NCDS3. NCDS User Support Working Paper 18, City University, London
Micklewright J (1989) Choice at 16. Economica 56:25–39
Micklewright J, Pearson M, Smith R (1989) Has Britain an early school leaving problem? Fiscal Stud 10:1–16
Micklewright J, Rajah N, Smith S (1994) Labouring and learning: part-time work and full-time education. Nat Inst Econ Rev 2:73–85
Pradhan M, van Soest A (1995) Formal and informal sector employment in urban areas in Bolivia. Labour Econ 2:275–298
Rice PG (1987) The demand for post-compulsory education in the UK and the effects of educational maintenance allowances. Economica 54:465–476
Robertson D, Symons J (1990) The occupational choice of British children. Econ J 100:828–841
Robertson D, Symons J (1996) Do peer groups matter? Peer group versus schooling effects on academic attainment. London School of Economics, Centre for Economic Performance Discussion Paper No. 311
Ruhm C (1997) Is high school employment consumption or investment? J Labor Econ 14:735–776
Sly F (1993) Economic activity of 16 and 17 year olds. Employment Gazette, July, pp 307–312
Steedman J (1983) Examination results in selective and non-selective schools. National Children's Bureau, London
Stewart M (1983) On least squares estimation when the dependent variable is grouped. Rev Econ Stud 50:737–753
Time to learn? The organizational structure of schools and student achievement Ozkan Eren · Daniel L. Millimet
Accepted: 28 August 2006 / Published online: 29 September 2006 © Springer-Verlag 2006
Abstract Utilizing parametric and nonparametric techniques, we assess the impact of a heretofore relatively unexplored 'input' in the educational process, time allocation, on the distribution of academic achievement. Our results indicate that school year length and the number and average duration of classes affect student achievement. However, the effects are not homogeneous – in terms of both direction and magnitude – across the distribution. We find that test scores in the upper tail of the distribution benefit from a shorter school year, while a longer school year increases test scores in the lower tail. Furthermore, test scores in the lower quantiles increase when students have at least eight classes lasting 46–50 min on average, while test scores in the upper quantiles increase when students have seven classes lasting 45 min or less or 51 min or more.

Keywords Student achievement · School quality · Stochastic dominance · Quantile treatment effects · Inverse propensity score weighting

JEL Classification C14 · I21 · I28
O. Eren · D. L. Millimet (B)
Department of Economics, Box 0496, Southern Methodist University, Dallas, TX 75275-0496, USA
e-mail: [email protected]

O. Eren
e-mail: [email protected]
1 Introduction The stagnation of student achievement over the past few decades in the United States is well-documented (e.g., Epple and Romano 1998; Hoxby 1999), despite the fact that per pupil expenditures have increased an average of roughly 3.5% per annum over the period 1890–1990 (Hanushek 1999) and that aggregate public expenditures on primary and secondary education total approximately $200 billion (Betts 2001). Given the discontinuity that exists between educational expenditures and student achievement, an important body of research has emerged attempting to discover the primary influences on student learning. However, a potentially important ‘input’ in the educational process that has been overshadowed is time allocation; specifically, time spent in school and time spent in classes. To partially address this gap, we assess the impact of several measures of the organizational structure of the learning environment on student achievement. In particular, we focus on the (1) length of the school year, (2) number of class periods per day, and (3) average length per class period. There are several reasons a priori to believe that such variables may impact student learning. First, as found in Eren and Millimet (2005), the organizational structure of the school day affects student misbehavior, as measured by the number of instances in which a student is punished for disobeying school rules, receives an in-school suspension, receives an out-of-school suspension, and skips class. Moreover, Figlio (2003) documents that disruptive student behavior adversely impacts the test performance of peers. Second, there may be advantages to different organizational structures in terms of optimally conveying information and minimizing repetitive teaching activities. Finally, school year length directly affects the amount of time students spend in school, and may impact the curriculum choices of schools. For instance, Pischke (2003) finds that shorter school years increased the probability that students had to repeat a grade level, although he finds no long-run adverse impact. From a policy perspective, the findings reported herein should be of substantive interest. Since such organizational details are well within the control of school administrators and/or state policymakers, the policy implications are obvious. Moreover, re-organization of the school day is relatively costless, especially relative to other educational policies such as reductions in class size. Altering the length of the school year, on the other hand, is not budget-neutral. For example, the Texas state legislature is currently finalizing legislation that would require all school districts to have a uniform start date for the school year after Labor Day, and end no later than June 7 (Dallas Morning News, 13 May 2005, p 20A). Proponents of such a legislative mandate argue that early school year start dates cost the state of Texas an estimated $332 million annually in foregone tourism revenue, and as much as another $10 million due to the electricity costs from cooling schools during the month of August, not to mention additional teacher salaries (Strayhorn 2000). Advocates of the earlier start date are concerned that student academic achievement would suffer from a shorter school year. Thus, empirical evidence on the link between school year
length and student performance will help inform the current political debate, at least in Texas.

To proceed, we use a nationally representative sample of tenth grade public school students from the US National Educational Longitudinal Survey (NELS) – conducted in 1990 – and assess the impact of school organization using both parametric and nonparametric techniques. First, we utilize standard regression analysis to analyze effects on the conditional mean of student test scores, restricting school organization to only an intercept effect. Second, since focusing on the (conditional) mean may mask heterogeneous effects of school organization, we extend the analysis by comparing test scores within a distributional framework, via the estimation of quantile treatment effects (QTE). Moreover, we provide a welfare-consistent method of summarizing the QTEs based on the notion of stochastic dominance (SD).1

The results are quite revealing. In particular, we reach five main conclusions. First, a longer school year is associated with higher unconditional, but not conditional, mean test scores. Second, shorter class periods, but more classes per day, are associated with higher unconditional and conditional mean test scores; however, neither effect is large in magnitude (less than 0.1 standard deviations). Third, a longer school year and reorganization of the school day to include shorter, but more, classes are associated with higher unconditional test scores across virtually the entire distribution. Interestingly, these associations are not uniform; the magnitudes are highest around the median. Fourth, when we examine the test score distributions adjusting for covariates using inverse propensity score weighting, we find extremely heterogeneous effects of school organizational structure. Specifically, test scores at the lower quantiles are higher when there are at least eight class periods per day, with an average class period of 46–50 min. On the other hand, test scores at the upper quantiles are higher when there are seven classes per day, with an average class period of 45 min or less or 51 min or more. In addition, there is some evidence that test scores in the lower quantiles are raised, while test scores in the upper quantiles are lowered, by a longer school year. Thus, a uniform start date – of the variety proposed in Texas – does not appear optimal (when considering student achievement only). Finally, while the mean-based effects of school organization are not found to be overly meaningful economically, the distributional analysis indicates that such organizational details are meaningful determinants of test performance for some students. For example, shortening classes from 46–50 to 45 min or less raises test scores above the median by roughly one-third to one-half of a standard deviation.
1 Although there exist alternative frameworks for comparing distributions (or portions of distributions), the information content provided by QTE and SD analysis has led to an increasing number of applications (see, e.g., Bitler et al. 2005; Amin et al. 2003; Abadie 2002; Bishop et al. 2000; Maasoumi and Heshmati 2000). Quantile regression (QR), in particular, is also frequently employed to assess heterogeneity in the effects of various ‘treatments’ on the (conditional) quantiles of outcomes (see, e.g., Abrevaya 2001; Arias et al. 2001; Buchinsky 2001). Similar in spirit to the current study, Levin (2001) applies QR methods to analyze the impacts of class size and peers on student test scores.
The remainder of the paper is organized as follows. Section 2 describes the empirical approaches. Section 3 discusses the data. Section 4 presents the results. Section 5 concludes.

2 Empirical methodology

2.1 Regression approach

To initially examine the data, we utilize standard regression analysis, thereby focusing on the conditional mean. Specifically, we estimate a linear regression model via OLS of the form

s_{ij} = \alpha + x_{ij}\beta + \mathrm{ORG}_j\tau + \varepsilon_{ij},   (1)
where s_{ij} is the test score for individual i in school j, x is a lengthy vector of individual, family, class, teacher, and school attributes, ORG is a vector of the school organization variables, and \varepsilon is a mean zero, possibly heteroskedastic, normally distributed error term.

2.2 Distributional approach

2.2.1 Quantile treatment effects

Focusing on the conditional mean may mask meaningful, and policy relevant, heterogeneity across the distribution. To examine such heterogeneity, we undertake several pairwise comparisons of the distributions of test scores, distinguished by school organization, and analyze the QTE. To begin, let S_0 and S_1 denote two test score variables to be compared. For instance, S_0 (S_1) may represent test scores for students attending schools where the school year is 180 days or less (181 days or more). \{s_{0i}\}_{i=1}^{N_0} is a vector of N_0 observations of S_0 (denoted by T_i = 0); \{s_{1i}\}_{i=1}^{N_1} is an analogous vector of realizations of S_1 (denoted by T_i = 1). Let F_0(s) \equiv \Pr[S_0 < s] represent the cumulative distribution function (CDF) of S_0; define F_1(s) similarly for S_1. The pth quantile of F_0 is given by the smallest value s_0^p such that F_0(s_0^p) = p; s_1^p is defined similarly for F_1. Under this notation, the QTE for quantile p is given by \Delta^p = s_1^p - s_0^p, which is simply the horizontal difference between the CDFs at probability p.2 Estimates, \hat{\Delta}^p, are obtained using the sample analogues of s_j^p \equiv \inf_s\{\Pr[S_j \le s] \ge p\}, j = 0, 1, for p = 0.01, \ldots, 0.99. In the results below, we plot \hat{\Delta}^p, as well as 90% confidence intervals based on a simple bootstrap technique, similar to Bitler et al. (2005).
2 It is important to note that the QTEs do not correspond to quantiles of the distribution of the treatment effect unless the assumption of rank preservation holds (Firpo 2005). Absent this assumption, whereby the ranking of student test scores would remain unchanged under each of the organizational structures being analyzed, the QTE simply reflects differences in the quantiles of the two marginal distributions.
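A minimal sketch of the QTE computation and its bootstrap confidence bands, using simulated data in place of the NELS:88 scores (the function and variable names are ours, not part of the original analysis):

import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for the two test score samples (T = 0 and T = 1).
s0 = rng.normal(50.9, 9.9, size=1500)    # e.g. school year of 180 days or less
s1 = rng.normal(52.1, 10.0, size=1500)   # e.g. school year of 181 days or more
p = np.arange(0.01, 1.00, 0.01)          # p = 0.01, ..., 0.99

def qte(a0, a1, p):
    # Horizontal difference between the two empirical CDFs at probability p.
    return np.quantile(a1, p) - np.quantile(a0, p)

delta_hat = qte(s0, s1, p)

# Simple bootstrap for pointwise 90% confidence intervals.
B = 500
boot = np.empty((B, p.size))
for b in range(B):
    boot[b] = qte(rng.choice(s0, s0.size, replace=True),
                  rng.choice(s1, s1.size, replace=True), p)
lo, hi = np.percentile(boot, [5, 95], axis=0)

print(delta_hat[[9, 49, 89]])    # QTEs at the 10th, 50th and 90th percentiles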
2.2.2 Test of equality

In addition to examining the QTEs at each percentile, we test the joint null H_0: \Delta^p = 0\ \forall p \in (0, 1), or equivalently H_0: F_0 = F_1, utilizing a two-sample Kolmogorov–Smirnov (KS) statistic (see, e.g., Abadie 2002; Bitler et al. 2005). The test is based on the following KS statistic:

d_{eq} = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \sup_s |F_1(s) - F_0(s)|.   (2)
Specifically, our procedure calls for:

1. Obtaining the empirical CDFs for S_0 and S_1, defined as

   \hat{F}_{jN_j}(s) = \frac{1}{N_j}\sum_{i=1}^{N_j} I(S_j \le s), \quad j = 0, 1,   (3)

   by computing the values of \hat{F}_{0N_0}(s_k) and \hat{F}_{1N_1}(s_k), where I(\cdot) is an indicator function and s_k, k = 1, \ldots, K, denotes points in the support that are utilized (K = 500 in the application).

2. Computing

   \hat{d}_{eq} = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \max_k \bigl|\hat{F}_1(s_k) - \hat{F}_0(s_k)\bigr|.   (4)

Inference for the test of equality of the distributions is conducted using the bootstrap procedure outlined in Abadie (2002). Specifically, we pool the two samples, resample (with replacement) from the combined sample, split the new sample into two samples, where the first N_0 observations represent S_0 and the remainder represent S_1, and compute the KS statistic. This process is repeated B times, and the p-value is given by

\text{p-value} = \frac{1}{B}\sum_{b=1}^{B} I\bigl(\hat{d}^{\,*}_{eq,b} > \hat{d}_{eq}\bigr).   (5)

The null hypothesis is rejected if the p-value is less than the desired significance level, say 0.10.

2.2.3 Incorporating covariates

Thus far, the distributional analysis has only considered the unconditional test score distributions. However, dependence between organizational structure and other determinants of student achievement most certainly precludes one from
inferring causation from the preceding analysis. To alleviate the bias that may arise due to selection on observables, we utilize a lengthy vector of observable determinants of student achievement and analyze the test score distributions adjusting for covariates. If unobservables are correlated with both test scores and the treatments being analyzed, then one cannot draw causal conclusions from our analysis. That said, the set of covariates (described below) is fairly exhaustive.3

To proceed, we utilize the inverse propensity score weighting procedure as applied in Bitler et al. (2005) (see also Firpo 2005). Specifically, the empirical CDF for S_j, j = 0, 1, is now computed as

\hat{F}_{jN_j}(s) = \frac{\sum_{i=1}^{N_j} \omega_i I(S_j \le s)}{\sum_{i=1}^{N_j} \omega_i},   (6)

where the weights, \omega_i, are given by

\omega_i = \frac{T_i}{\hat{p}_i(x_i)} + \frac{1 - T_i}{1 - \hat{p}_i(x_i)},   (7)

and T_i is the indicator variable for a particular school organizational structure defined above, and \hat{p}_i(x_i) is the propensity score (i.e., the predicted likelihood of observation i attending a school with a particular school organizational structure given a set of observed attributes, x_i, from a first-stage probit model). Inference is conducted using the same bootstrap procedure discussed above. The only difference is that the first-stage probit model, and resulting weights, are estimated anew during each bootstrap replication.

3 If exclusion restrictions were available, one could identify the causal effect for the subpopulation of students whose value of the treatment is influenced by the instrument (known as compliers) using the method in Abadie (2002). Given the lack of instruments at this time, we instead adjust for a host of covariates and assume that selection on observables is reasonable (see, e.g., Dearden et al. 2002; Maasoumi et al. 2005). We return to this point later.
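The weighted CDF (6), the weights (7), the KS statistic (4) and the pooled 'equal' bootstrap p-value (5) fit together as in the sketch below. This is not the authors' implementation: the data and propensity scores are simulated stand-ins, and, for brevity, the first-stage probit is not re-estimated inside the bootstrap loop as it is in the paper.

import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-ins: scores, treatment indicator and propensity scores
# (in the paper the propensity scores come from the first-stage probit).
n = 3000
T = rng.integers(0, 2, size=n)
s = rng.normal(50.0 + 1.5 * T, 10.0)
p_hat = np.clip(rng.beta(2, 2, size=n), 0.05, 0.95)

w = T / p_hat + (1 - T) / (1 - p_hat)       # weights, equation (7)
grid = np.linspace(s.min(), s.max(), 500)   # K = 500 support points

def weighted_cdf(scores, weights, grid):
    # Equation (6): weighted share of observations with S <= s_k.
    ind = scores[:, None] <= grid[None, :]
    return (weights[:, None] * ind).sum(axis=0) / weights.sum()

def ks_stat(scores, treat, weights, grid):
    F0 = weighted_cdf(scores[treat == 0], weights[treat == 0], grid)
    F1 = weighted_cdf(scores[treat == 1], weights[treat == 1], grid)
    n0, n1 = (treat == 0).sum(), (treat == 1).sum()
    return np.sqrt(n0 * n1 / (n0 + n1)) * np.abs(F1 - F0).max()   # equation (4)

d_eq = ks_stat(s, T, w, grid)

# 'Equal' bootstrap (Abadie 2002): pool, resample, relabel the first N0 draws
# as group 0 and the rest as group 1 (weights are resampled with the scores
# here; the paper re-estimates the probit in every replication).
n0 = int((T == 0).sum())
labels = np.r_[np.zeros(n0, dtype=int), np.ones(n - n0, dtype=int)]
B = 250
d_star = np.empty(B)
for b in range(B):
    draw = rng.choice(n, size=n, replace=True)
    d_star[b] = ks_stat(s[draw], labels, w[draw], grid)

p_value = (d_star > d_eq).mean()             # equation (5)
print(d_eq, p_value)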
2.2.4 Stochastic dominance

While examination of the QTEs is of great interest, policy implications may be ambiguous if equality of the CDFs is rejected and the QTEs vary in sign or statistical significance across the distribution. Thus, we also perform tests for first- and second-order stochastic dominance.4 Tests for SD offer the possibility of making limited, but robust, welfare comparisons of distributions. Such comparisons are robust in that they are insensitive to the exact preference function within a large class. However, they are limited to the extent that they restrict attention to welfare functions that depend solely on the outcome of interest, and these tests certainly do not take into account other issues such as cost. That said, SD tests are nonetheless extremely powerful as they highlight exactly what can be said about the distributions being compared.

Several tests for SD have been proposed in the literature; the approach herein is based on a generalized Kolmogorov–Smirnov test.5 To begin, assuming general von Neumann–Morgenstern conditions, let U_1 denote the class of (increasing) social welfare functions u such that welfare is increasing in test scores (i.e., u' \ge 0), and U_2 the sub-class of functions in U_1 such that u'' \le 0 (i.e., concavity). Concavity represents an aversion to inequality in student achievement; a large concentration of very high-achieving students and very low-achieving students is undesirable. Note that u refers to the welfare function of a policymaker, not the student.

Under this notation, S_0 first-order stochastically dominates S_1 (denoted S_0 FSD S_1) iff E[u(S_0)] \ge E[u(S_1)] for all u \in U_1, with strict inequality for some u, where E[\cdot] is the expected value operator. Equivalently,

F_0(z) \le F_1(z) \quad \forall z \in Z, \text{ with strict inequality for some } z,   (8)

where Z denotes the union of the supports of S_0 and S_1. Condition (8) may be alternatively stated as

\Delta^p \le 0 \quad \forall p \in (0, 1), \text{ with strict inequality for some } p.   (9)

If S_0 FSD S_1, then the expected social welfare from S_0 is at least as great as that from S_1 for all increasing welfare functions, with strict inequality holding for some utility function(s) in the class. The distribution of S_0 second-order stochastically dominates S_1 (denoted S_0 SSD S_1) iff E[u(S_0)] \ge E[u(S_1)] for all u \in U_2, with strict inequality for some u. Equivalently,

\int_{-\infty}^{z} F_0(v)\,dv \le \int_{-\infty}^{z} F_1(v)\,dv \quad \forall z \in Z, \text{ with strict inequality for some } z,   (10)

or

\int_{0}^{p} \Delta^v\,dv \le 0 \quad \forall p \in (0, 1), \text{ with strict inequality for some } p.   (11)

If S_0 SSD S_1, then the expected social welfare from S_0 is at least as great as that from S_1 for all increasing and concave utility functions in the class U_2, with strict inequality holding for some utility function(s) in the class. FSD implies SSD and higher orders.

4 One could test for third-order SD rankings and higher, but the interpretation becomes increasingly obtuse.
5 Maasoumi and Heshmati (2000) provide a brief review of the development of alternative tests.
To test for FSD and SSD, we utilize the following generalizations of the Kolmogorov–Smirnov test criteria:

d = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \min \sup_{z \in Z} \bigl[F_0(z) - F_1(z)\bigr],   (12)

s = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \min \sup_{z \in Z} \int_{-\infty}^{z} \bigl[F_0(v) - F_1(v)\bigr]\,dv,   (13)

where the min is taken over F_0 - F_1 and F_1 - F_0, in effect performing two tests in order to leave no ambiguity between the equal and unrankable cases. Specifically, our procedure calls for:

1. Computing the empirical CDFs using either (3) or (6), depending on whether one wishes to adjust for covariates, at s_k, k = 1, \ldots, K.
2. Computing the differences \hat{d}_1(s_k) = \hat{F}_{0N_0}(s_k) - \hat{F}_{1N_1}(s_k) and \hat{d}_2(s_k) = \hat{F}_{1N_1}(s_k) - \hat{F}_{0N_0}(s_k).
3. Obtaining \hat{d} = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \min\{\max_k\{\hat{d}_1\}, \max_k\{\hat{d}_2\}\}.
4. Calculating the sums \hat{s}_{1j} = \sum_{k=1}^{j} \hat{d}_1(s_k) and \hat{s}_{2j} = \sum_{k=1}^{j} \hat{d}_2(s_k), j = 1, \ldots, K.
5. Obtaining \hat{s} = \sqrt{\frac{N_0 N_1}{N_0 + N_1}}\ \min\{\max_j\{\hat{s}_{1j}\}, \max_j\{\hat{s}_{2j}\}\}.

If \hat{d} \le 0 and \max\{\hat{d}_1\} < 0, then S_0 is observed to first-order dominate S_1; if \hat{d} \le 0 and \max\{\hat{d}_2\} < 0, then the reverse is observed. If \hat{d} > 0, then there is no observed ranking in the first-order sense. Similar interpretations are given to \hat{s}, \max\{\hat{s}_{1j}\}, and \max\{\hat{s}_{2j}\} with respect to second-order dominance.

Inference is conducted using two different bootstrap procedures to evaluate the null of FSD (SSD), which is equivalent to H_0: d \le 0 (H_0: s \le 0). The first follows Abadie (2002), and is identical to the approach described above for the test of equality. Specifically, we pool the two samples, resample (with replacement) from the combined sample, randomly split the new sample into two samples, and compute the test statistics in (12) and (13). This process, which approximates the distribution of the test statistics under the least favorable case (LFC) of F_0 = F_1, is repeated B times, and the p-value is given by (5). The null is rejected if the p-value is less than the desired significance level. We refer to this procedure as the equal bootstrap. However, as noted in Maasoumi and Heshmati (2005) and Linton et al. (2005), the boundary between the null and alternative hypotheses is much larger than the LFC region. As such, bootstrap-based tests imposing the LFC are not asymptotically similar on the boundary, implying that the test is biased. In particular, if d = 0 or s = 0 is true, but the LFC fails to hold, the test will not have the appropriate asymptotic size. Thus, we utilize a second procedure following Maasoumi and Heshmati (2000, 2005) and Maasoumi and Millimet (2005). Now, we resample (with replacement) from each individual sample, S_0 and S_1. Thus, this procedure does not impose the LFC (or any other portion of the null). Consequently, we do not form p-values using (5). Instead, under this resampling scheme, if Pr\{\hat{d}^* \le 0\} is large, say 0.90
or higher, and \hat{d} \le 0, we infer FSD to a desirable degree of confidence. This is a classic confidence interval test; we are assessing the likelihood that the event \hat{d} \le 0 has occurred. Pr\{\hat{s}^* \le 0\} is interpreted in similar fashion. We refer to this procedure as the simple bootstrap.

In light of the various test statistics and methods of inference, we provide a numerical example to illustrate the interpretation of the results. Let \max\{\hat{d}_1\} = 2, \max\{\hat{d}_2\} = 1, \max\{\hat{s}_1\} = 10, and \max\{\hat{s}_2\} = -1. From (2), (12) and (13), respectively, and ignoring the multiplication by the scalar reflecting sample size, it follows that \hat{d}_{eq} = 2, \hat{d} = 1, and \hat{s} = -1. The fact that \hat{d} > 0 implies that the empirical CDFs cross at least once; \hat{s} < 0 (with \max\{\hat{s}_2\} < 0) implies that the empirical CDFs exhibit a SSD ranking favoring F_1. Suppose the bootstrap procedure for the test of equality yields Pr\{\hat{d}^*_{eq} > \hat{d}_{eq}\} = Pr\{\hat{d}^*_{eq} > 2\} = 0.01, implying that only one percent of the bootstrap test statistics are greater than the sample test statistic. Further, suppose Pr\{\hat{d}^* > \hat{d}\} = Pr\{\hat{d}^* > 1\} = 0.03 using the equal bootstrap (which imposes the LFC), and Pr\{\hat{d}^* < 0\} = 0.00 using the simple bootstrap (which does not impose the LFC). Based on these bootstrap frequencies, we reject equality of the two distributions at the 95% confidence level (since Pr\{\hat{d}^*_{eq} > \hat{d}_{eq}\} = 0.01 < 0.05). However, the equal and simple bootstrap results both indicate a lack of FSD; the equal bootstrap implies rejection of the null of FSD at the 95% confidence level (since Pr\{\hat{d}^* > \hat{d}\} = 0.03 < 0.05) and the simple bootstrap provides no evidence that the test statistic lies in the (non-positive) interval necessary for FSD (since Pr\{\hat{d}^* < 0\} = 0.00). In terms of inference concerning s, suppose the equal bootstrap yields Pr\{\hat{s}^* > \hat{s}\} = Pr\{\hat{s}^* > -1\} = 0.99, and the simple bootstrap yields Pr\{\hat{s}^* < 0\} = 0.94. Based on these bootstrap frequencies, both procedures indicate that the observed SSD ranking is statistically significant. Specifically, the equal bootstrap fails to reject the null of SSD since the p-value, 0.99, is greater than any conventional level of significance (e.g., 0.05 or 0.10); the simple bootstrap indicates a high probability, 94%, that the test statistic lies in the (non-positive) interval necessary for SSD. Note that if the p-value from the equal bootstrap were above, say, 0.10 while the simple bootstrap returned a frequency of a non-positive test statistic of less than 0.90, then the two bootstrap procedures would yield conflicting evidence regarding the statistical significance of the SSD ranking. This is a frequent occurrence in our results below, and may be attributable to the bias that arises from imposing the LFC. In this case, our preference is to be conservative and favor the simple bootstrap result since it does not impose the LFC.
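The five steps above, together with the simple bootstrap, can be sketched as follows; the data are simulated and the grid of K = 500 support points mirrors the choice in the text, but this is an illustration rather than the authors' GAUSS code.

import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-ins for two (possibly covariate-adjusted) score samples.
s0 = rng.normal(50.3, 10.0, size=2000)
s1 = rng.normal(51.0, 9.9, size=2000)
grid = np.linspace(min(s0.min(), s1.min()), max(s0.max(), s1.max()), 500)

def ecdf(x, grid):
    return (x[:, None] <= grid[None, :]).mean(axis=0)

def sd_stats(a0, a1, grid):
    # Steps 1-5: discretized versions of the statistics in (12) and (13).
    scale = np.sqrt(a0.size * a1.size / (a0.size + a1.size))
    d1 = ecdf(a0, grid) - ecdf(a1, grid)      # F0 - F1 at each grid point
    d2 = -d1                                  # F1 - F0
    d = scale * min(d1.max(), d2.max())       # step 3
    s1j, s2j = np.cumsum(d1), np.cumsum(d2)   # step 4
    s = scale * min(s1j.max(), s2j.max())     # step 5
    return d, s

d_hat, s_hat = sd_stats(s0, s1, grid)

# Simple bootstrap: resample each sample separately (the LFC is not imposed)
# and record how often the bootstrap statistics are non-positive.
B = 250
d_star, s_star = np.empty(B), np.empty(B)
for b in range(B):
    d_star[b], s_star[b] = sd_stats(rng.choice(s0, s0.size, replace=True),
                                    rng.choice(s1, s1.size, replace=True), grid)

# d_hat <= 0 together with a high Pr{d* <= 0} indicates an FSD ranking
# (which max is negative tells its direction); analogously for SSD.
print(d_hat, (d_star <= 0).mean())
print(s_hat, (s_star <= 0).mean())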
3 Data

The data are obtained from the National Education Longitudinal Study of 1988 (NELS:88), a large longitudinal study of eighth grade students conducted by the National Center for Education Statistics (NCES). The NELS:88 sample was chosen in two stages. In the first stage, a total of 1,032 schools were selected from a universe of approximately 40,000 schools. In the second stage, up to 26 students were selected from each of the sample schools based on
race and gender. The original sample, therefore, contains approximately 25,000 eighth grade students. Follow-up surveys were administered in 1990, 1992, 1994 and 2000. To measure academic achievement, students were administered cognitive tests in reading, social science, mathematics and science during the base year (eighth grade), first follow-up (tenth grade), and second follow-up (twelfth grade). Each of the four grade-specific tests contains material appropriate for the grade, but includes sufficient overlap from previous grades to permit measurement of academic growth.6 While four test scores are available per student, the teacher and class information used in the conditioning set (discussed below) is only available for two subjects per student; thus, our sample is restricted to two observations per student.7

We utilize three categorical measures of the organizational structure of schools: (1) length of the school year, divided into two categories: 180 days or less and 181 or more days (180+ days), (2) number of class periods per school day, divided into three categories: six or fewer periods, seven periods, and eight or more periods (8+), and (3) average class length, divided into three categories: 45 min or less, 46–50 min, and 51 or more minutes (51+).8

To construct the final sample, we focus on the student achievement of tenth grade public school students, pooling test scores across all four subjects (and including subject indicators in the conditioning set), as in Boozer and Rouse (2001). We include only students with non-missing test score data and the relevant school structure variables. The final sample contains 10,288 students representing 794 schools (18,135 total observations). When using inverse propensity score weighting to adjust for covariates, the first-stage probit model includes an extensive set of individual, family, teacher, class, and school characteristics:

Individual: race, gender, eighth grade test score, eighth grade composite grade point average (GPA), indicator of whether the student repeated any grade.
Family: father's education, mother's education, family composition, parents' marital status, socioeconomic status of the family, indicators of home reading material (books and newspaper), indicator for a home computer, indicator of whether the student has a specific place at home for study.9

6 We follow Boozer and Rouse (2001) and Altonji et al. (2005) and utilize the raw item response theory (IRT) scores for each test.
7 The two subjects vary across students, however, so that all four subjects are represented in the sample.
8 The NELS:88 includes six categories for length of the school year, four categories for class periods per day and five categories for average length of the class period. To help manage the number of SD tests, we combine 1–174, 175, 176–179 and 180 days into one category; 181–184 and 185+ days into another category; eight and 9+ class periods into one category; 1–40 and 41–45 min into one category; and 51–55 and 56+ min into one category.
9 Socioeconomic status of the family ranges from −2.97 to 2.56 and was created by the administrators of the NELS:88 using the following parental questionnaires: (1) father's education, (2) mother's education, (3) father's occupation, (4) mother's occupation, and (5) family income.
Teacher: race, gender, age, education, indicator for possessing a subject-specific certificate, indicator for possessing a subject-specific graduate degree, indicator of whether the teacher has complete control over curriculum content, indicator of whether the teacher has complete control over disciplinary policy, indicator of whether the teacher feels very well prepared.
Class: subject indicators, class size, number of minority students in the class, number of limited English proficiency (LEP) students in the class, teacher's evaluation of the overall class achievement.
School: urban/rural status, region, total school enrollment, grade-level enrollment, average daily attendance rate, student racial composition, percentage of students receiving free lunch, percentage of students from single parent homes, average dropout rate of tenth graders prior to graduation, number of full-time teachers, number of teachers by race, teacher salaries, an indicator for whether teachers have gone on strike in the past four years, percentage of students in remedial reading and remedial math, and average eighth grade test score.

Dummy variables are used to control for missing values of the individual, family, teacher, class and school controls.

Prior to continuing, several comments are warranted related to the issue of selection on observables. First, lagged test score and lagged GPA proxy for innate ability, following the strategy of Dearden et al. (2002), Maasoumi and Millimet (2005), and others. Lagged test scores also control for all previous inputs into the educational production process, giving the results a 'value-added' interpretation (Goldhaber and Brewer 1997; Todd and Wolpin 2003). In addition, we condition on the teacher's subjective assessment of the overall ability of the class from which the test score is taken. The ability level of the class also captures peer effects, which have been shown to be important. Second, as argued in Hanushek (1979), controlling for family attributes such as socioeconomic status and parental education levels also severely mitigates any bias resulting from endogenous residential choice. Third, the inclusion of a host of school-level variables, as well as actual class size, adjusts for many attributes that are likely to play a significantly more prominent role in residential choice decisions than the types of treatments analyzed herein. Moreover, controls for the percentage of students in remedial math and reading reflect the ability of the student body as a whole, which may influence school organizational structure. Finally, Goldhaber and Brewer (1997, p. 505), who also use data on tenth grade math test scores from the NELS:88, conclude: "Unobservable school, teacher and class characteristics are important in explaining student achievement but do not appear to be correlated with observable variables in our sample. Thus, our results suggest that the omission of unobservables does not cause biased estimates in standard educational production functions."

In the interest of brevity, Tables 1, 2, and 3 display the summary statistics for some of the more interesting variables utilized in the analysis, disaggregated by school organization; the remainder are available upon request. The summary statistics reveal substantial observable differences across students exposed to different school organizational structures. First, minorities are more likely to
Table 1 Summary statistics by length of school year

                                                 Mean (standard deviation)
Variable                                         180 days or less     180+ days
10th Grade test score                            50.917 (9.900)       52.115 (9.956)
Female                                           0.507 (0.499)        0.524 (0.499)
Race
  White (1 = Yes)                                0.776 (0.416)        0.803 (0.397)
  Black (1 = Yes)                                0.110 (0.313)        0.082 (0.275)
  Hispanic (1 = Yes)                             0.069 (0.254)        0.062 (0.241)
8th Grade test score                             51.270 (9.907)       52.303 (9.998)
Socioeconomic status of the family               −0.094 (0.722)       0.065 (0.728)
Father's education                               13.325 (3.247)       13.978 (3.294)
Percentage of daily attendance rate              92.714 (5.421)       92.791 (3.290)
Percentage of students in remedial math          8.797 (9.325)        6.781 (6.902)
Percentage of students receiving free lunch      22.321 (19.124)      16.287 (17.987)
Class size                                       23.503 (6.782)       23.813 (6.818)
Minority students in class                       4.995 (7.295)        4.234 (6.332)
Teacher's race
  White (1 = Yes)                                0.896 (0.305)        0.919 (0.271)
  Black (1 = Yes)                                0.053 (0.225)        0.025 (0.156)
  Hispanic (1 = Yes)                             0.015 (0.122)        0.006 (0.077)
Percentage of teachers holding a subject-specific graduate degree     0.276 (0.447)    0.296 (0.456)
Teacher's control over discipline (1 = complete control)              0.930 (0.253)    0.935 (0.245)
Observations                                     15429                2706
Fraction of sample                               0.850                0.150

Appropriate panel weights utilized. The variables listed are only a subset of those utilized in the analysis. The remainder are excluded in the interest of brevity. See the footnote to Table 4 for the full set of covariates. All sample statistics are available upon request
attend schools with a shorter school year, fewer class periods per day, and longer classes on average. Similarly, minority teachers and teachers without a graduate degree in the subject they are teaching have a higher representation in these schools as well. Second, variables related to economic status (e.g., socioeconomic status, the fraction of students receiving free lunch, and father's education) indicate that students attending schools with a shorter school year, seven class periods per day, and classes of 51 min or more on average are worse off. Finally, variables related to student achievement (e.g., tenth and eighth grade test scores) follow a similar pattern, indicating that students attending schools with a shorter school year, fewer class periods per day, and longer classes on average perform worse. Given the sizeable differences in observables across school organizational structures, one would expect the unconditional results to differ substantially from those adjusting for covariates.

4 Results

4.1 Regression results

OLS estimates of (1) are displayed in Table 4; heteroskedasticity-robust standard errors are given beneath the coefficients. The point estimates suggest a
Table 2 Summary statistics by number of class periods per day

                                                 Mean (standard deviation)
Variable                                         0–6 Class periods    7 Class periods      8+ Class periods
10th Grade test score                            50.336 (9.985)       51.007 (9.864)       52.841 (9.641)
Female                                           0.515 (0.499)        0.512 (0.499)        0.494 (0.500)
Race
  White (1 = Yes)                                0.735 (0.440)        0.793 (0.404)        0.852 (0.354)
  Black (1 = Yes)                                0.127 (0.333)        0.103 (0.304)        0.065 (0.247)
  Hispanic (1 = Yes)                             0.080 (0.271)        0.067 (0.250)        0.044 (0.205)
8th Grade test score                             50.877 (9.971)       51.145 (9.833)       53.049 (9.822)
Socioeconomic status of the family               −0.061 (0.738)       −0.096 (0.730)       −0.048 (0.687)
Father's education                               13.437 (3.344)       13.309 (3.278)       13.567 (3.050)
Percentage of daily attendance rate              92.803 (5.083)       92.923 (3.621)       92.221 (7.154)
Percentage of students in remedial math          8.631 (9.365)        8.914 (9.548)        7.530 (7.202)
Percentage of students receiving free lunch      20.649 (18.677)      23.695 (19.905)      19.381 (18.164)
Class size                                       24.670 (6.667)       23.620 (6.620)       21.637 (6.851)
Minority students in class                       6.224 (7.497)        4.473 (6.990)        2.751 (6.064)
Teacher's race
  White (1 = Yes)                                0.874 (0.331)        0.911 (0.284)        0.932 (0.250)
  Black (1 = Yes)                                0.063 (0.243)        0.048 (0.214)        0.023 (0.150)
  Hispanic (1 = Yes)                             0.015 (0.124)        0.018 (0.136)        0.0009 (0.031)
Percentage of teachers holding a subject-specific graduate degree     0.262 (0.440)    0.270 (0.444)    0.331 (0.470)
Teacher's control over discipline (1 = complete control)              0.925 (0.261)    0.939 (0.238)    0.930 (0.254)
Observations                                     8416                 6113                 3606
Fraction of sample                               0.464                0.337                0.198

Appropriate panel weights utilized. The variables listed are only a subset of those utilized in the analysis. The remainder are excluded in the interest of brevity. The full set of sample statistics are available upon request
negative impact on student test scores from a longer school year (τ_180+ = −0.088, s.e. = 0.148), a positive impact of structuring the school day to include more class periods (τ_7 = 0.267, s.e. = 0.157; τ_8+ = 0.262, s.e. = 0.208), and a negative impact of having longer class periods (τ_46–50 = −0.462, s.e. = 0.176; τ_51+ = −0.745, s.e. = 0.206). However, only the final estimates with respect to the average length of a class period, and the effect of seven class periods (relative to fewer than seven), are statistically significant. Thus, the ceteris paribus effect of changing from the modal organizational structure (six or fewer classes that on average last 51 min or more) to seven classes that last 45 min or less raises student test scores by roughly (0.267 + 0.745 ≈) one point. Since the mean test score is approximately 50 with a standard deviation of 10, this represents an increase of approximately 2%, or one-tenth of a standard deviation. Nonetheless, the fact that such a reorganization is seemingly costless – changing from six 55-min classes to seven 45-min classes would reduce total in-class time by 15 min per day – makes this an important finding. To examine whether such effects are heterogeneous across the test score distribution, we now turn to the distributional results.
Table 3 Summary statistics by average class length

                                                 Mean (standard deviation)
Variable                                         1–45 min             46–50 min            51+ min
10th Grade test score                            52.511 (9.970)       51.740 (9.659)       50.042 (9.937)
Female                                           0.503 (0.500)        0.510 (0.499)        0.512 (0.499)
Race
  White (1 = Yes)                                0.814 (0.388)        0.811 (0.390)        0.744 (0.435)
  Black (1 = Yes)                                0.080 (0.271)        0.087 (0.282)        0.129 (0.335)
  Hispanic (1 = Yes)                             0.050 (0.218)        0.068 (0.252)        0.076 (0.265)
8th Grade test score                             52.456 (10.129)      52.058 (9.845)       50.559 (9.811)
Socioeconomic status of the family               −0.008 (0.706)       −0.048 (0.741)       −0.113 (0.720)
Father's education                               13.607 (3.112)       13.592 (3.334)       13.227 (3.273)
Percentage of daily attendance rate              91.044 (7.239)       93.163 (4.772)       93.209 (3.985)
Percentage of students in remedial math          9.339 (9.625)        7.438 (8.125)        8.756 (9.221)
Percentage of students receiving free lunch      19.606 (20.056)      21.290 (18.004)      22.385 (19.210)
Class size                                       21.931 (6.413)       23.236 (6.720)       24.473 (6.841)
Minority students in class                       3.432 (6.588)        4.172 (6.701)        5.994 (7.522)
Teacher's race
  White (1 = Yes)                                0.930 (0.253)        0.902 (0.296)        0.883 (0.321)
  Black (1 = Yes)                                0.022 (0.147)        0.047 (0.212)        0.063 (0.243)
  Hispanic (1 = Yes)                             0.003 (0.058)        0.018 (0.135)        0.015 (0.123)
Percentage of teachers holding a subject-specific graduate degree     0.348 (0.476)    0.274 (0.446)    0.251 (0.433)
Teacher's control over discipline (1 = complete control)              0.914 (0.279)    0.948 (0.222)    0.928 (0.256)
Observations                                     3705                 5393                 9037
Fraction of sample                               0.204                0.297                0.498

Appropriate panel weights utilized. The variables listed are only a subset of those utilized in the analysis. The remainder are excluded in the interest of brevity. The full set of sample statistics are available upon request
4.2 Distributional results

The QTEs based on pairwise comparisons of the unconditional (adjusted) test score distributions are displayed in the top (middle) panels of Figs. 1, 2, 3, 4, 5, 6, and 7.10 The bottom panel in each figure plots the unconditional and the adjusted QTEs together (omitting the confidence intervals for clarity) to facilitate comparison of the magnitudes. Results from the tests of equality, FSD, and SSD using the unconditional distributions are given in Table 5; results using the inverse propensity score weighted distributions are provided in Table 6. Table 7 provides a summary of the statistical results for ease of reference.

4.2.1 School year length

To assess the impact of school year length, we begin by examining the unconditional results. Figure 1 (top panel) reveals that the QTEs follow an inverted-U

10 Plots of the actual CDFs or the integrated CDFs (used in the detection of SSD) are available upon request.
Table 4 Estimated effects of time variables on test scores (OLS)

Variable                    Coefficient (standard error)
School year length
  180+ days (1 = Yes)       −0.088 (0.148)
Class periods
  7 Periods (1 = Yes)       0.267 (0.157)
  8+ Periods (1 = Yes)      0.262 (0.208)
Class length
  46–50 min (1 = Yes)       −0.462 (0.176)
  51+ min (1 = Yes)         −0.745 (0.206)
NOTES: All results use appropriate panel weights. Standard errors are corrected for arbitrary heteroskedasticity. Control set includes race, gender, 8th grade test score, 8th grade composite GPA, repeat grade, father’s education, mother’s education, family composition, parents’ marital status, socioeconomic status of the family, home reading material, home computer, specific place for study, teacher race, teacher gender, teacher age, teacher education, teacher’s subject certificate, teacher’s subject specific graduate degree, teacher’s control over curriculum content, disciplinary policy, teacher’s preparation for class, subject, class size, number of minority students in the class, number of LEP students in the class, teacher’s evaluation of the overall class achievement, urban/rural status, region, school enrollment, grade level enrollment, average daily attendance rate in the school, racial composition of the school, percentage of students receiving free lunch in the school, percentage of students from single parent homes in the school, average dropout rate of 10th graders prior to graduation, number of total full time teachers as well as by race in school, teacher salary, indicator of whether teachers have gone on strike in the past four years, percentage of students in the school in remedial reading and remedial math and average eighth grade test score at the school level
shape favoring longer school years, and are statistically significant over the majority of the distribution. Specifically, there is no difference between the CDFs in the lower and upper tails, and the QTEs peak at roughly the median. In terms of magnitude, the difference at the median is roughly two points; the mean difference (see Table 1) is approximately 1.2 points. Moreover, we easily reject equality of the unconditional distributions at the p < 0.01 level (Table 5, Panel A). To determine if we can make robust welfare statements, we turn to the SD results. Here, we observe no SD ranking in either the first- or second-degree sense, despite the fact that the unconditional mean favors students exposed to a longer school year. This arises from the fact that the CDFs cross below the fifth percentile. In terms of inference, the simple bootstrap indicates that the probabilities that d ≤ 0 and s ≤ 0 are very low, suggesting that there is no statistically meaningful FSD or SSD ranking. The equal bootstrap, on the other hand, yields p-values well above 0.10, indicating that one cannot reject the null of FSD or SSD when one approximates the distribution of the test statistics under the LFC. In the end, then, we conclude that the unconditional test score distributions are statistically different across students attending schools with shorter versus longer school years; however, there is no conclusive evidence of SD in either the first- or second-degree sense. Nonetheless, over the majority of the distribution, there
Table 5 Distributional tests of unconditional test scores

Distributions (X vs. Y)                     Observed ranking   Test of equality
(A) 10th Grade school year length
  180 days or less vs. 180+ days            None               p = 0.000
(B) 10th Grade school periods
  0–6 Periods vs. 7 Periods                 Y SSD X            p = 0.000
  0–6 Periods vs. 8+ Periods                Y FSD X            p = 0.000
  7 Periods vs. 8+ Periods                  Y FSD X            p = 0.000
(C) 10th Grade average class minutes
  1–45 min vs. 46–50 min                    X SSD Y            p = 0.000
  1–45 min vs. 51+ min                      X FSD Y            p = 0.000
  46–50 min vs. 51+ min                     X SSD Y            p = 0.000

[The first- and second-order dominance statistics (d1,MAX, d2,MAX, d, s1,MAX, s) and the bootstrap probabilities Pr{d* ≤ 0}, Pr{d* ≥ d}, Pr{s* ≤ 0}, and Pr{s* ≥ s} reported in the original table could not be recovered and are not reproduced here.]

NOTES: All results use appropriate panel weights. Probabilities are obtained via 250 bootstrap repetitions. No observed ranking implies only that the distributions are not rankable in the first- or second-degree stochastic dominance sense. p-values for the test of equality and those using the equal boot approximate the distributions of the test statistics when the CDFs are equal, which represents the Least Favorable Case when the null is first- or second-order dominance; the simple boot does not. See the text for further details
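To make the inference procedure reported in Tables 5 and 6 concrete, the following sketch (not the authors' code) illustrates a sup-type FSD statistic together with the two bootstrap probabilities: Pr{d* ≤ 0} from the simple bootstrap and Pr{d* ≥ d} from the equal bootstrap. The sample arrays, the evaluation grid, and the pooled-resampling implementation of the Least Favorable Case are illustrative assumptions, and scaling factors are omitted.

import numpy as np

rng = np.random.default_rng(0)

def ecdf(sample, grid):
    # empirical CDF of `sample` evaluated at each point of `grid`
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def fsd_stat(x, y, grid):
    # d = sup_z [F_X(z) - F_Y(z)]; d <= 0 is consistent with X first-order dominating Y
    return np.max(ecdf(x, grid) - ecdf(y, grid))

def bootstrap_probs(x, y, reps=250):
    grid = np.sort(np.concatenate([x, y]))
    d = fsd_stat(x, y, grid)
    pooled = np.concatenate([x, y])
    simple = equal = 0
    for _ in range(reps):
        # simple bootstrap: resample each distribution separately
        xs = rng.choice(x, size=len(x), replace=True)
        ys = rng.choice(y, size=len(y), replace=True)
        simple += fsd_stat(xs, ys, grid) <= 0
        # equal bootstrap: resample both groups from the pooled data,
        # one common way of imposing the Least Favorable Case (equal CDFs)
        xe = rng.choice(pooled, size=len(x), replace=True)
        ye = rng.choice(pooled, size=len(y), replace=True)
        equal += fsd_stat(xe, ye, grid) >= d
    return d, simple / reps, equal / reps   # d, Pr{d* <= 0}, Pr{d* >= d}

# hypothetical test-score samples
x = rng.normal(50, 10, 500)
y = rng.normal(52, 10, 500)
print(bootstrap_probs(x, y))

The second-order statistic s can be handled analogously by cumulating the difference in CDFs over the grid before taking the supremum.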
Table 6 Distributional tests of test scores adjusted for covariates

Distributions (X vs. Y)                     Observed ranking   Test of equality
(A) 10th Grade school year length
  180 days or less vs. 180+ days            None               p = 0.012
(B) 10th Grade school periods
  0–6 Periods vs. 7 Periods                 Y SSD X            p = 0.000
  0–6 Periods vs. 8+ Periods                None               p = 0.000
  7 Periods vs. 8+ Periods                  None               p = 0.000
(C) 10th Grade average class minutes
  1–45 min vs. 46–50 min                    None               p = 0.000
  1–45 min vs. 51+ min                      X SSD Y            p = 0.000
  46–50 min vs. 51+ min                     X SSD Y            p = 0.000

[The first- and second-order dominance statistics and the simple and equal bootstrap probabilities reported in the original table could not be recovered and are not reproduced here.]

Distributions adjusted for covariates using inverse propensity score weighting, where the covariates are identical to those in Table 4 plus controls for number of periods and average class length (Panel A), school year length and average class length (Panel B), and school year length and number of periods (Panel C). See Table 5 and text for further details
Table 7 Summary of results (SB = simple bootstrap, EB = equal bootstrap)

(A) 10th Grade school year length
180 days or less vs. 180+ days, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).
180 days or less vs. 180+ days, covariates controlled: equality rejected. First-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).

(B) 10th Grade school periods
0–6 Periods vs. 7 Periods, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).
0–6 Periods vs. 7 Periods, covariates controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).
0–6 Periods vs. 8+ Periods, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: Y SSD X (SB finds large probability of non-positive test statistic; EB fails to reject null of dominance).
0–6 Periods vs. 8+ Periods, covariates controlled: equality rejected. First-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance). Second-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance).
7 Periods vs. 8+ Periods, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: Y SSD X (SB finds large probability of non-positive test statistic; EB fails to reject null of dominance).
7 Periods vs. 8+ Periods, covariates controlled: equality rejected. First-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance). Second-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance).

(C) 10th Grade average class minutes
1–45 min vs. 46–50 min, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).
1–45 min vs. 46–50 min, covariates controlled: equality rejected. First-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance). Second-order dominance: no evidence (SB fails to find dominance; EB rejects null of dominance).
1–45 min vs. 51+ min, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: X SSD Y (SB finds large probability of non-positive test statistic; EB fails to reject null of dominance).
46–50 min vs. 51+ min, covariates not controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).
46–50 min vs. 51+ min, covariates controlled: equality rejected. First-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance). Second-order dominance: no consistent evidence (SB fails to find dominance; EB fails to reject null of dominance).

Distributions adjusted for covariates using inverse propensity score weighting, where the covariates are identical to those in Table 4 plus controls for number of periods and average class length (Panel A), school year length and average class length (Panel B), and school year length and number of periods (Panel C). SB simple bootstrap, EB equal bootstrap
Nonetheless, over the majority of the distribution, there is a statistically meaningful positive association between school year length and student achievement. To see if these results are simply reflecting observable differences between students attending schools with different school year lengths, we turn to the results adjusting for covariates. Figure 1 (middle panel) indicates that the QTEs are positive between roughly the tenth and 60th percentiles, and negative nearly everywhere else. Thus, after adjusting for covariates, we find a positive (negative) impact of a longer school year on test scores at lower (higher) quantiles. However, the 90% confidence intervals include zero across virtually the entire distribution; only the negative QTEs are statistically significant above the 95th percentile. On the other hand, we reject equality of the distributions at the p < 0.02 confidence level (Table 6, Panel A). Thus, while few of the QTEs are individually statistically significant, we easily reject the null that they are jointly equal to zero. As such, we conclude that school year length does impact student achievement, and the impact is heterogeneous across the distribution. Assessing the SD tests, we find that the simple bootstrap and the equal bootstrap both reject the null of FSD at the p < 0.10 confidence level (simple bootstrap: Pr(d∗ ≤ 0) = 0.000; equal bootstrap: Pr(d∗ ≥ d) = 0.092). The fact that we reject equality and FSD indicates that the crossing of the CDFs is statistically significant, implying that a longer school year raises test scores at lower quantiles while lowering them at upper quantiles. In terms of the SSD tests, the simple bootstrap indicates a low probability that
s ≤ 0, while the equal bootstrap fails to reject the null when one approximates the distribution of the test statistics under the LFC. In the end, then, we conclude that, after adjusting for covariates, the test score distributions across students attending schools with shorter versus longer school years are statistically different and do cross (thereby ruling out FSD); however, there is no conclusive evidence of SSD.
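As a concrete illustration of the covariate adjustment behind the middle panels and Table 6, the sketch below (not the authors' implementation) reweights each observation by the inverse of an estimated propensity score and then compares weighted quantiles. The data frame, the column names, and the logit propensity model are hypothetical assumptions; the paper's weighting may differ in detail.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def weighted_quantile(values, weights, q):
    # quantile of a weighted empirical distribution
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return np.interp(q, cum, v)

def ipw_qte(df, outcome, treat, covariates, quantiles):
    # 1. propensity score: probability of "treatment" (e.g. 180+ day year) given covariates
    X = sm.add_constant(df[covariates])
    pscore = sm.Logit(df[treat], X).fit(disp=0).predict(X)
    # 2. IPW weights: 1/p for the treated group, 1/(1-p) for the comparison group
    w = np.where(df[treat] == 1, 1 / pscore, 1 / (1 - pscore))
    y, d = df[outcome].to_numpy(), df[treat].to_numpy()
    q1 = [weighted_quantile(y[d == 1], w[d == 1], q) for q in quantiles]
    q0 = [weighted_quantile(y[d == 0], w[d == 0], q) for q in quantiles]
    # adjusted QTE at each requested quantile
    return np.array(q1) - np.array(q0)

# hypothetical usage: longer vs. shorter school year, adjusting for the Table 4 controls
# qte = ipw_qte(nels, "test_score", "year_180plus", control_vars, np.arange(0.05, 1.0, 0.05))

Because the propensity score is re-estimated for each pairwise comparison, the resulting adjusted distributions need not be transitive across comparisons, as noted in the next subsection.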
4.2.2 Number of class periods

The next set of results pertains to the number of class periods per day. The QTEs are plotted in Figs. 2, 3, and 4; Panel B in Tables 5 and 6 displays the statistical results. Examination of the unconditional QTEs (top panels in the figures) yields two findings. First, there is a clear ranking among the three categories; the QTEs are strictly positive for 8+ periods relative to seven periods or six or fewer periods, and are strictly positive for seven periods relative to six or fewer periods. In addition, the QTEs are statistically significant over the majority of the (if not the entire) distribution in each of the three pairwise comparisons; we easily reject equality of the unconditional distributions in each case at the p < 0.01 confidence level (Table 5, Panel B). Thus, the QTEs confirm the ranking obtained by examining the unconditional means (Table 2). Second, as with school year length, the QTEs are not uniform, instead following an inverted-U shape. For example, test scores in either tail of the distribution for schools with 8+ class periods per day are roughly one point higher than test scores for schools with six or fewer class periods; test scores around the median are over three points higher. Turning to the SD results, Table 5 (Panel B) confirms what we already knew from the QTEs: we observe a first-order ranking in each case, favoring the distribution associated with more class periods per day. In terms of inference, the simple bootstrap indicates that the probability that d ≤ 0 is relatively low in each case, suggesting that there is no statistically meaningful FSD ranking. However, the simple bootstrap does suggest a statistically significant SSD ranking of 8+ class periods per day over seven and six or fewer class periods (seven: Pr(s ≤ 0) = 0.944; six or fewer: Pr(s ≤ 0) = 1.000). The equal bootstrap, on the other hand, yields p-values well above 0.10 in all six cases (three pairwise comparisons, for FSD and SSD), indicating that one cannot reject the null of FSD or SSD when one approximates the distribution of the test statistics under the LFC. In the end, then, we conclude that the unconditional test score distributions are statistically different across students attending schools with different numbers of class periods per day, with higher test scores across virtually the entire distribution associated with schools with more classes per day. Moreover, while the evidence regarding statistically meaningful FSD rankings is mixed, there is conclusive evidence of SSD rankings favoring 8+ class periods per day. As such, any policymaker with a social welfare function that is increasing and concave in test scores would prefer the distribution associated with 8+ class periods per day.
Next, we discuss the results adjusting for covariates. Before doing so, however, it is important to note that unlike in the unconditional analysis, the pairwise comparisons adjusting for covariates do not necessarily exhibit any transitivity property since the estimated propensity score, and thus weights, are unique to each pairwise comparison. As a result, we discuss each comparison individually. Comparing seven class periods relative to six or fewer class periods, Fig. 2 (middle panel) indicates that the QTEs are positive over the entire distribution, and statistically significant everywhere but the tails, consonant with the unconditional results. Moreover, we easily reject equality of the distributions at the p < 0.01 confidence level (Table 6, Panel B). Yet, the magnitudes are much larger than in the unconditional case; test scores are generally one-fifth to one-half a standard deviation higher at each quantile in schools with seven class periods. That said, the QTEs are not precisely estimated. In terms of the SD tests, we observe a second-order ranking (as the CDFs cross in the lower tail), although the simple bootstrap indicates that the probabilities that d ≤ 0 and s ≤ 0 are very low, suggesting that there is no statistically meaningful FSD or SSD ranking. The equal bootstrap, on the other hand, yields p-values well above 0.10, indicating that one cannot reject the null of FSD or SSD when one approximates the distribution of the test statistics under the LFC. In the end, then, we conclude that the test score distributions adjusted for covariates are statistically different, with test scores from schools with seven class periods per day being higher than test scores from schools with fewer class periods per day at virtually every quantile. However, the evidence regarding statistically meaningful SD rankings in the first- or second-order sense is mixed. Comparing 8+ class periods relative to six or fewer class periods, Fig. 3 (middle panel) indicates that the QTEs are negative above the tenth percentile (favoring six or fewer classes), and marginally statistically significant everywhere above roughly the 30th percentile. Moreover, we easily reject equality of the distributions at the p < 0.01 confidence level (Table 6, Panel B). As in Fig. 2, the magnitudes are much larger than in the unconditional case, yet the confidence intervals are very wide. In terms of the SD tests, we do not observe a first- or second-order ranking (as the CDFs cross in the lower tail), and both the simple and equal bootstrap reject the null of FSD and SSD. Thus, we conclude that the test score distributions adjusted for covariates are statistically different, and cross in the lower tail, with the test scores at relatively high quantiles from schools with six or fewer classes per day being higher than the corresponding test scores from schools with 8+ class periods per day. Lastly, comparing 8+ class periods relative to seven class periods, Fig. 4 (middle panel) indicates that the QTEs are statistically insignificant everywhere except below roughly the tenth percentile (favoring 8+ class periods). However, we continue to easily reject equality of the distributions at the p < 0.01 confidence level (Table 6, Panel B). In terms of the SD tests, the results are identical to the previous comparison. Specifically, we do not observe a first- or second-order ranking, and both the simple and equal bootstrap reject the null of FSD or SSD.
Thus, we conclude that the test score distributions adjusted for covariates are statistically different and cross, with test scores at extremely low quantiles
from schools with 8+ classes per day being higher than the corresponding test scores from schools with seven class periods per day. Stepping back and viewing the results adjusted for covariates in total, we reach two conclusions. First, schools currently with six or fewer class periods per day can improve student performance at virtually every quantile by switching to seven class periods per day. Second, schools currently with seven periods per day can improve the extremely low quantiles of the test score distribution by switching to eight or more classes per day. However, given the imprecision of the estimates, we cannot rule out the possibility that the magnitudes of these effects may be small.

4.2.3 Average class length

The final set of results pertains to the average length of a class period. The QTEs are plotted in Figs. 5, 6, and 7; Panel C in Tables 5 and 6 contains the statistical results. Examination of the unconditional QTEs (top panels in the figures) yields two findings. First, as with the number of class periods, there is a clear ranking among the three categories; the QTEs are strictly negative for 46–50 and 51+ min relative to 45 min or less, and are strictly negative for 51+ min relative to 46–50 min. Thus, shorter class periods are associated with higher test scores at each quantile of the distribution. In addition, the QTEs are statistically significant over the majority of the (if not the entire) distribution in each of the three pairwise comparisons. We easily reject equality of the unconditional distributions in each case at the p < 0.01 confidence level (Table 5, Panel C). As a result, the QTEs confirm the ranking obtained by examining the unconditional means (Table 3). Second, as with the previous organizational structure measures, the QTEs are not uniform, instead following a U shape, particularly in Figs. 6 and 7. This implies that the negative association between test scores and classes lasting 51 min or more is strongest around the median. For instance, the median student attending schools with classes of 45 min or less scores approximately one-third of a standard deviation higher than the median student attending schools with classes of 51+ min. In terms of the SD results, Table 5 (Panel C) indicates that we observe either a first- or second-order ranking in each case, favoring the distribution associated with shorter classes. The simple bootstrap, however, indicates that the probabilities that d ≤ 0 and s ≤ 0 are relatively low in each case except when testing for SSD between 51+ and 45 min or less (Pr(s ≤ 0) = 0.996). The equal bootstrap, on the other hand, yields p-values well above 0.10 in all six cases (three pairwise comparisons, for FSD and SSD), indicating that one cannot reject the null of FSD or SSD when one approximates the distribution of the test statistics under the LFC. Thus, we conclude that the unconditional test score distributions are statistically different across students attending schools with different average class lengths, with test scores from schools with shorter classes being higher than test scores from schools with longer classes at virtually every quantile. In addition, although the evidence regarding statistically meaningful SD rankings is mixed, there is conclusive evidence of an SSD ranking favoring classes 45 min
or less in duration over classes 51+ min in duration. As a result, any policymaker with a social welfare function that is increasing and concave in test scores would prefer the distribution associated with classes of 45 min or less on average over the distribution associated with classes lasting longer than 50 min on average. Lastly, we turn to the results adjusting for covariates, discussing each pairwise comparison individually, as in the previous section, since transitivity does not necessarily hold. Beginning with Fig. 5 (middle panel), we find that the QTEs continue to favor schools with classes of 45 min or less in duration relative to 46–50 min classes, except now the impact is only above the median; below the median the QTEs are not statistically significant. We easily reject the null of equal distributions at the p < 0.01 confidence level (Table 6, Panel C). In terms of the SD tests, we do not observe an FSD or SSD ranking, and both bootstrap techniques reject the null of FSD or SSD. Thus, we conclude that the test score distributions adjusted for covariates are statistically different, with shorter classes raising test scores by roughly one-third to one-half a standard deviation in the upper quantiles. However, rankings may certainly vary among policymakers with different social welfare functions even within the class U2. Comparing classes of 51+ min relative to 45 min or less, Fig. 6 (middle panel) indicates that the QTEs are statistically insignificant across the entire distribution, although the QTEs are negative (favoring shorter classes) below roughly the 60th percentile and we do reject the null of equal distributions at the p < 0.01 confidence level (Table 6, Panel C). In addition, both bootstrap techniques reject the null of FSD; the simple bootstrap rejects SSD as well. Thus, we conclude that there is modest evidence that test scores in the lower tail of the distribution improve with shorter classes, while test scores in the upper quantiles are invariant to the choice between 51+ and 45 min or less. Comparing classes of 51+ min relative to 46–50 min yields similar findings. Specifically, Fig. 7 (middle panel) indicates that the QTEs are statistically insignificant across the majority of the distribution, although the QTEs are negative (favoring shorter classes) below roughly the 80th percentile and are modestly statistically significant between the 15th and 60th percentiles. Moreover, we reject the null of equal distributions at the p < 0.01 confidence level (Table 6, Panel C). In terms of the SD tests, the simple bootstrap rejects the null of FSD and SSD, while the equal bootstrap fails to reject the null of FSD or SSD. Thus, we conclude that there is modest evidence that test scores around the median and at lower quantiles are higher with shorter classes; evidence concerning FSD and SSD is inconclusive. Assessing the results adjusted for covariates, we reach two conclusions. First, schools currently with classes lasting 46–50 min on average can improve test scores at quantiles above the median by shortening classes to 45 min or less. Second, schools currently with classes of over 50 min on average can improve test scores around the median and at lower quantiles by shortening classes to 46–50 min. However, as with the number of class periods, we cannot rule out the possibility that the magnitudes of these effects may be small given the imprecision of the estimates.
[Figure 1 panels not reproduced: quantile treatment effects (180+ days minus 180 days or less) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 1 Differences in CDFs: 180 days or less and 180+ days. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 2 panels not reproduced: quantile treatment effects (7 periods minus 0–6 periods) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 2 Differences in CDFs: 0–6 periods and 7 periods. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 3 panels not reproduced: quantile treatment effects (8+ periods minus 0–6 periods) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 3 Differences in CDFs: 0–6 periods and 8+ periods. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 4 panels not reproduced: quantile treatment effects (8+ periods minus 7 periods) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 4 Differences in CDFs: 7 periods and 8+ periods. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 5 panels not reproduced: quantile treatment effects (46–50 minutes minus 1–45 minutes) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 5 Differences in CDFs: 1–45 and 46–50 min. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 6 panels not reproduced: quantile treatment effects (51+ minutes minus 1–45 minutes) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 6 Differences in CDFs: 1–45 and 51+ min. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
[Figure 7 panels not reproduced: quantile treatment effects (51+ minutes minus 46–50 minutes) plotted against the quantile (0–100), with 90% confidence intervals.]
Fig. 7 Differences in CDFs: 46–50 and 51+ min. Top panel uses unconditional CDFs; middle panel adjusts for covariates using inverse propensity score weighting. Bottom panel replicates the top and middle panels (omitting confidence intervals) for comparison of magnitudes
5 Conclusion

The stagnation of academic achievement in the United States and elsewhere has given rise to a growing literature seeking to understand the determinants of student learning. Utilizing parametric and nonparametric techniques, we assess the impact of a heretofore relatively unexplored ‘input’ in the educational process, time allocation, on the conditional mean and across the distribution of tenth grade test performance. Our results indicate that the allocation of time to school, as measured by the length of the school year, as well as the allocation of time within the school day, as measured by the number and average duration of classes, matters. As indicated in Table 7, for all pairwise comparisons made, regardless of whether we adjust for covariates or not, we never fail to reject equality of the test score distributions across students experiencing different organizational structures. However, the effects are not homogeneous across students; thus, a narrowly defined focus on the conditional mean masks the ‘true’ effects of such school organizational details. Moreover, the distributional analysis offers opportunities to reach a consensus in policy debates that can become very politicized, as evidenced by the Texas debate over school year length, or to illuminate exactly from where political differences arise. Specifically, our distributional analysis shows that the small effects of school organization on unconditional and conditional mean test scores are extremely misleading: quantiles above the median of the test score distribution rise when the school year is less than 180 days, whereas a longer school year raises the lower quantiles; likewise, the lower quantiles of the test score distribution increase when students have at least eight classes of 46–50 min on average, while the upper quantiles increase when students have seven classes lasting on average 45 min or less or 51 min or more. Thus, flexibility both within and across districts in terms of the structure of the school day and year would allow students to be sorted into their ‘optimal’ learning environment (at least in terms of maximizing test performance). However, if a ‘fixed’ organizational structure is to be chosen, the stochastic dominance analysis indicates that any policymaker who is inequality averse when it comes to test scores would prefer school days containing eight or more class periods, each lasting 46–50 min, if one considers only the unconditional distribution; no welfare conclusions are possible (in the first- or second-degree stochastic dominance sense) after adjusting for covariates (see Table 7). The one caveat to these results is that they may be sensitive to the selection on observables assumption, although as stated previously our control set is quite extensive. Future work may assess the validity of this assumption as instruments become available.

Acknowledgements The authors are grateful for helpful comments from two anonymous referees and the editor, Bernd Fitzenberger. Any remaining errors are our own.
References Abadie A (2002) Bootstrap tests for distributional treatment effects in instrumental variable models. J Am Stat Assoc 97:284–292 Abrevaya J (2001) The effects of demographics and maternal behavior on the distribution of birth outcomes. Empir Econ 26:247–257 Altonji JG, Elder TE, Taber CR (2005) Selection on observed and unobserved variables: assessing the effectiveness of catholic schools. J Political Econ 113:151–184 Amin S, Rai AS, Topa G (2003) Does microcredit reach the poor and vulnerable? Evidence from Northern Bangladesh. J Develop Econ 70:59–82 Arias O, Hallock KF, Sosa-Escudero W (2001) Individual Heterogeneity in the returns to schooling: instrumental variables quantile regression using twins data. Empir Econ 26:7–40 Betts JR (2001) The impact of school resources on women’s earnings and educational attainment: findings from the national longitudinal survey of young women. J Labor Econ 19:635–657 Bishop JA, Formby JP, Zeager LA (2000) The effect of food stamp cashout on undernutrition. Econ Lett 67:75–85 Bitler MP, Gelbach JB, Hoynes HW (2005) What mean impacts miss: distributional effects of welfare reform experiments. Am Econ Rev (in press) Boozer MA, Rouse CC (2001) Intraschool variation in class size: patterns and implications. J Urban Econ 50:163–189 Buchinsky M (2001) Quantile regression with sample selection: estimating women’s return to education in the U.S. Empir Econ 26:87–113 Dearden L, Ferri J, Meghir C (2002) The effect of school quality on educational attainment and wages. Rev Econ Stat 84:1–20 Epple D, Romano RE (1998) Competition between private and public schools, vouchers, and peer group effects. Am Econ Rev 88:33–62 Eren O, Millimet DL (2005) Time to misbehave? Unpublished manuscript, Department of Economics, Southern Methodist University Figlio DN (2003) Boys named sue: disruptive children and their peers. Unpublished manuscript, Department of Economics, University of Florida Firpo SP (2005) Efficient semiparametric estimation of quantile treatment effects. Econometrica (in press) Goldhaber DD, Brewer DJ (1997) Why don’t schools and teachers seem to matter? assessing the impact of unobservables on educational productivity. J Hum Resour 32:505–523 Hanushek EA (1979) Conceptual and empirical issues in the estimation of educational production functions. J Hum Resour 14:351–388 Hanushek EA (1999) The evidence on class size. In: Mayer SE, Peterson P (eds) Earning and learning: how schools matter. Brookings Institute, Washington, DC, pp 130–168 Hoxby CM (1999) The productivity of schools and other local public goods producers. J Public Econ 74:1–30 Levin J (2001) For whom the reductions count: a quantile regression analysis of class size and peer effects on scholastic achievement. Empir Econ 26:221–246 Linton O, Maasoumi E, Whang YJ (2005) Consistent testing for stochastic dominance: a subsampling approach. Rev Econ Stud 72:735–765 Maasoumi E, Heshmati A (2000) Stochastic dominance amongst swedish income distributions. Econ Rev 19:287–320 Maasoumi E, Heshmati A (2005) Evaluating dominance ranking of PSID incomes by various household attributes. IZA Discussion paper no. 1727 Maasoumi E, Millimet DL (2005) Robust inference concerning recent trends in U.S. environmental quality. J Appl Econ 20:55–77 Maasoumi E, Millimet DL, Rangaprasad V (2005) Class size and educational policy: who benefits from smaller classes? 
Econ Rev 24:333–368 Pischke J-S (2003) The impact of length of the school year on student performance and earnings: evidence from the German short school year. NBER working paper no. 9964 Strayhorn CK (2000) An economic analysis of the changing school start date in Texas. Window on State Government Special Report, December Todd PE, Wolpin KI (2003) On the specification and estimation of the production function for cognitive achievement. Econ J 113:F3–F33
Who actually goes to university? Oscar Marcenaro-Gutierrez · Fernando Galindo-Rueda · Anna Vignoles
Revised: 15 March 2006 / Published online: 25 August 2006 © Springer-Verlag 2006
Abstract Access to higher education (HE) is a major policy issue in England and Wales. There is concern that children from lower socio-economic backgrounds are far less likely to get a degree. We analyse the changing association between socio-economic background and the likelihood of going to university, using data from the Youth Cohort Study (YCS), spanning the period 1994–2000. We find evidence of substantial social class inequality in HE participation but conclude that this is largely due to education inequalities that emerge earlier in the education system. Conditional on GCSE and A level performance, we find no additional role for socio-economic background or parental education in determining pupils’ likelihood of going to university.
JEL
I21 · I23 · I28
Keywords Higher education · Socio-economic gap · Education participation
O. Marcenaro-Gutierrez University of Malaga and Centre for Economic Performance, London School of Economics, Houghton Street, London, WC2A 2AE, UK F. Galindo-Rueda Centre for Economic Performance, London School of Economics, Houghton Street, London, WC2A 2AE, UK A. Vignoles (B) Bedford Group, Institute of Education, University of London, 20 Bedford Way, London, WC1H 0AL, UK e-mail:
[email protected]
1 Introduction

Access to HE is a major policy issue in England and Wales, and indeed across the UK. There is much concern that students from lower socio-economic backgrounds are substantially less likely to acquire a university degree, as compared to students from more advantaged backgrounds. Recent research has suggested that the problem of socio-economic inequality1 in HE is long standing, and in fact worsened significantly during the late 1980s and early 1990s.2 This paper extends this field of research by considering similar issues in the mid and late 1990s.3 A number of policy changes occurred in the 1990s that caused issues around access to HE to become even more topical. For example, the introduction of up-front tuition fees for degree courses in 1998 raised fears that this might hinder access by poorer students. Despite the fact that poorer students are exempt from fees, there were many who predicted that tuition fees would be more likely to deter poorer students, further widening the socio-economic gap in HE (Callender 2003). Whilst we are unable to isolate the impact of tuition fees specifically with these data,4 we are able to model the changing association between socio-economic background and HE participation during the 1990s, clearly an important period of policy change in the English and Welsh HE sectors. Furthermore, we provide evidence on the timing of socio-economic gaps in educational attainment that emerge in the English and Welsh education system. Specifically, we ask whether educational inequalities that are related to family background actually widen in the post A level phase of the English and Welsh education system or whether, conditional on attainment at GCSE and A level, family background plays no further role in determining participation in HE. This evidence is therefore very relevant to the debate on the timing of policy interventions and sequential complementarities in educational investments (Carneiro and Heckman 2003). Blanden and Machin (2003) have investigated in some detail the relationship between parental income and HE participation. They conclude that the expansion of the education system in the 1970s, through to the early 1990s, was associated with a widening of the gap in HE participation between rich and poor children. Glennerster (2001) also found evidence of a strengthening of the relationship between social class and HE participation in the early 1990s.5

1 There are of course many forms of inequality in HE participation, such as inequalities by ethnicity, gender, location or combinations of the above. Here we focus on education inequality, as measured by parental socio-economic group and also parental education level. 2 Blanden et al. (2002), Galindo-Rueda and Vignoles (2005), Machin and Vignoles (2004). 3 Our previous work on this issue specifically examined some elements of the HE participation
decision immediately before and after the introduction of tuition fees (Galindo-Rueda et al. 2004). Here we consider a longer time period and analyse a broader range of issues relating to HE participation. 4 It is not obvious how one would evaluate the impact of tuition fees in a conventional evaluation framework with existing data. 5 Erikson and Goldthorpe (1985), Erikson and Goldthorpe (1992), Saunders (1997) and Schoon et al. (2002) have examined issues relating to education and social mobility, to cite just a few.
Using two cohorts of YCS data (from 1996 and 1999), Galindo-Rueda et al. (2004) found some widening of the gap in HE participation between students from lower and higher socio-economic backgrounds in the period immediately after the introduction of tuition fees in 1998. In this paper we extend this work, focusing on a longer time period (1994–2000) and modelling the association between a wide range of individual and family background characteristics and HE participation. In particular, we investigate the extent to which the problem of inequality in HE is in fact not rooted in the HE sector itself, but is attributable to inequalities and decisions made earlier in the system. In addition to the empirical literature mentioned above, this paper relates to a burgeoning economic theoretical literature on educational inequality (e.g. Benabou 1996; De Fraja 2002, forthcoming; Fernández and Rogerson 1996; Fernández and Rogerson 1998), as well as the sociological literature on this issue (e.g. Breen and Goldthorpe 1997). The paper contributes, from an economic perspective, to the growing number of empirical studies that have investigated the relationship between socio-economic background and the likelihood of HE participation in England and Wales. Much of this empirical work, some of which has also used YCS data, has been done in a sociological framework (Jackson et al. 2004; Gayle et al. 2003). The paper also contributes to a broader literature on other sources of inequality in educational attainment, which includes differences by ethnicity, gender and disability (e.g. Bradley and Taylor 2000; Buchardt 2004). The next section briefly describes trends in HE participation in England and Wales and recent changes to HE policy. Section 3 describes our data. Section 4 presents our results and Section 5 concludes.

2 Higher education participation and policy

Education participation in England and Wales has risen steadily for the last half century (Fig. 1),6 at least as measured by the proportion of students staying on in education past the compulsory school leaving age. Substantial growth in the HE participation rate has been more recent, however. In the early 1990s, there was a dramatic increase in the HE participation rate for young people (Fig. 1), partly related to the merging of the polytechnic and university sectors at that time. Subsequently, HE participation continued to grow in most years but at a lower rate. The introduction of tuition fees in 1998 and the abolition of maintenance grants in 1999 do not appear to have had a major impact on aggregate HE participation.7

6 Figure 1 is based on the DfES Age Participation Index which measures the proportion of those under 21 years in each social class participating in HE for the first time (i.e. young entrants from each social class as a percentage of all young people in each social class). 7 Prior to 1998, students did not pay for their HE courses and there was a means tested grant.
Tuition fees are currently payable before the course starts by students whose parents earn more than around £30,000 pa. Some fee exemption is given for students whose parents earn between approximately £21,000 and £30,000 pa. Student loans have replaced the grant system.
[Figures 1 and 2 not reproduced. Figure 1 plots the percentage staying on at 16 and the Age Participation Index over 1950–2000; Figure 2 plots total first-degree student numbers for men and women over 1994–2001.]
Fig. 1 Long-term trends in staying on at 16 and age participation index (API)
Fig. 2 Total number of students studying for a first degree, by gender (England and Wales)
Figure 2 shows the simple trend of first-degree students (all years, including new entrants and full and part-time students),8 by gender and over time, for England and Wales.9 A slight stagnation of the upward trend in student numbers is evident following the introduction of tuition fees; the trend then resumes its upward path. Although participation has been rising during this period, there is still concern about who goes to university, which is the motivation for this paper. Historically, access to HE in England and Wales has been predominantly limited to those from higher socio-economic groups. Certainly, if one looks at the very top and bottom of the socio-economic scale, the situation is dire. More than three quarters of students from professional backgrounds study for a degree, compared to just 14% of those from unskilled backgrounds. Moreover, this inequality in the HE system has persisted over the last forty years.

8 This figure is derived using HESA data. Overseas students, those who did not report a domicile
postcode, students with missing data on various fields are excluded. Full details of these samples are available from the authors. 9 The situation is similar in terms of participation in Scotland, despite differences in student funding arrangements between the two countries.
Table 1 Age Participation Index (API) (%) by social class, 1991/2–2001

                            Year of entry
Class                       1992  1993  1994  1995  1996  1997  1998  1999  2000  2001
Professional (A)              71    73    78    80    82    79    72    73    76    79
Intermediate (B)              39    42    45    46    47    48    45    45    48    50
Skilled non-manual (C1)       27    29    31    31    32    31    29    30    33    33
Skilled manual (C2)           15    17    18    18    18    19    18    18    19    21
Partly skilled (D)            14    16    17    17    17    18    17    17    19    18
Unskilled (E)                  9    11    11    12    13    14    13    13    14    15
A–C1                          40    43    46    47    48    48    45    45    48    50
C2–E                          14    16    17    17    18    18    17    17    18    19

Source: Department for Education and Skills Age Participation Index which measures the proportion of those under 21 years in England, Scotland and Wales of each social class participating in higher education (HE) for the first time (i.e. young entrants from each social class as a percentage of all young people in each social class)
Descriptive data for the 1990s10 (Table 1), the period relevant to our study, suggest a rise in participation by all socio-economic groups and a small widening of the gap in participation rates between richer and poorer students.11 Further policy developments in HE are on the horizon. In 2004, the UK parliament narrowly passed legislation to make further changes to the funding of HE. Variable tuition fees have been proposed, i.e. fees that vary both by course and by institution. But perhaps the most important feature of the proposals, from the perspective of widening access, is that fees will be repaid after graduation via an income contingent loan system, and grants will be restored to low-income students. Given these continued policy developments in HE, the socio-economic characteristics of young people who progress into HE continue to be of wider interest.

3 Data and methodology

The data set we use for our analysis is the YCS, which is a series of longitudinal surveys conducted by the Department for Education and Skills. The surveys are of a particular academic year group or “cohort”, and are carried out by contacting cohort members by post three times, at yearly intervals, when they are aged 16–17, 17–18 and 18–19. Respondents are first surveyed in the year after they are eligible to leave compulsory schooling. They are then followed up, generally over a 2-year period.12 The data collected includes information about the economic status of the young person, and in particular whether they have entered HE by age 18/19, as well as their educational background,

10 Based on the DfES Age Participation Index. 11 There is a dip in the HE participation rate in 1997/1998: the participation rate for both lower
and upper socio-economic groups returns to its pre-fees level by 2000. 12 Some of the early cohorts have since been followed up to age 21 and beyond.
qualifications, family background and other socio-economic indicators. The survey is nationally representative (England and Wales) and the sample size of each cohort is around 20,000 observations. We use cohorts 6–9, i.e. four cohorts of individuals who were aged 18 in 1994, 1996, 1998 and 2000.13 The YCS has been used extensively as a resource to analyse educational outcomes and subsequent transitions into the labour market (Croxford 2000; Dolton et al. 2001; Gayle et al. 2000, 2002, 2003; Howieson and Payne et al. 1996; Payne 1996; Rice 1999). It is not without its faults, however. Non-response and attrition are a problem in the YCS, and there has been extensive academic research on this issue (Lynn 1996). For example, our last cohort (9) started out with an initial target sample size of 22,500. In the first survey at age 16/17, the response rate was 65%. A similar response rate was also achieved in the 17-year-old and 18-year-old surveys. This means that the 18-year-old sample constitutes only 28% of the initial sample (approximately 0.65³ of the target, or 6,304 young people).14 The extent of any attrition problem is somewhat minimised by the fact that our preferred model uses a restricted sample of higher achieving students (i.e. those with five or more grades A*-C at GCSE), since these are the students who have sufficient prior attainment to be able to proceed to HE (see below for a discussion of this issue and our estimation strategy). Whilst this does not in itself overcome the attrition issue, it is the case that higher achieving students are less likely to attrit from the sample. The characteristics of the restricted sample vary relatively little from sweep to sweep, suggesting that this group is less likely to attrit (see Appendix A for an illustration from Cohort 7). The data are re-weighted for non-response in sweep 1, to bring them in line with population estimates. Nonetheless, to the extent that attrition is not random, and that it differs in both extent and nature across the YCS cohorts, we will still have some bias in our estimates. As a check we compare the unweighted HE participation rate from the YCS with published statistics on national HE participation rates during the period. They are closely aligned. In 1994, 29% of the YCS sample was doing a degree; this compares to an official HE participation rate of 30%.15 YCS data suggest that the HE participation rate rose to 35% by the end of the period in 2000/2001; the DfES’ own estimates also suggest a 35% participation rate by 2001/2002. Thus despite the limitations of the YCS data mentioned above, we are confident that the data accurately reflect national trends in HE participation at that time. We estimate the conditional probability of participating in HE using a probit model, pooling all four cohorts. Our choice of explanatory variables is
13 We avoid using earlier cohorts since the abolition of the binary line in 1992 makes over time comparisons of participation trends in the earlier period problematic. 14 A table showing the response rates of the various cohorts is given in Appendix A Table 4, with a slight decline in the initial response rate over time. 15 Official statistics on HE participation rates are from the Department for Education and Skills
which calculates participation by young people in HE [i.e. the Age Participation Index (API)] for Great Britain. See website below: http://www.dfes.gov.uk/trends/index.cfm?fuseaction=home. showChart&cid=4&iid=23&chid=89
derived from the vast literature that focuses on the relationship between family background and child educational outcomes (e.g. Behrman 1997). We use a standard probit model such that:

P(H = 1|X) = P(Xβ + ε > 0) = Φ(Xβ),

where H represents participation in HE (or not), X is a vector of explanatory variables including the individual’s prior attainment, family background and school inputs, ε is normally distributed, and Φ denotes the standard normal CDF. Prior attainment in this case is measured at age 16 (GCSE) and, as discussed, our main model imposes the condition of a minimum of five grades A*-C at GCSE.16 Family background is measured by parental socio-economic background, parental education and ethnicity.17 School inputs are proxied by school type. In our basic specifications, we control for mean cohort effects by including cohort dummies. We acknowledge that our data do not include some variables that have been found to be important in determining educational attainment (Feinstein and Symons 1999), including neighbourhood context variables18 and in particular family income.19 To the extent that we have omitted explanatory variables that are correlated with the variables included in the model, the model will suffer from omitted variable bias. This is a common problem in the literature given that most data sets are not sufficiently rich to control for all the factors that have been found to influence educational attainment. It does, however, make causal interpretation of the explanatory variables problematic, a point we return to below. Another issue is that the explanatory variables are correlated. Parental education level clearly influences parental socio-economic status. Likewise, socio-economic background influences school type, as does ethnicity. For example, ethnic minority students attend schools of worse quality than do students from white households (Cook and Evans 2000; Fryer and Levitt 2005). However, we also know these variables have independent effects. Parental education, for example, moderates the effects of family characteristics on child attainment via parenting style and educational behaviours (Feinstein et al. 2004). We therefore include all these measures as independent variables and acknowledge that we are measuring conditional relationships. So, for example, by including school type at age 16, we will tend to reduce the magnitude of the association between socio-economic background and HE participation since socio-economic background may influence educational attainment via type of school attended. As this paper focuses on the additional role of these family background variables in the period after compulsory schooling, this is not a major limitation, and these conditional relationships are discussed in detail in our results section.

16 In some specifications we control for prior achievement at age 18 (A level). 17 We also control for gender. 18 We can only control for region. 19 Family income can affect child attainment through its impact on the resources spent on the child’s
development and through its influence on parental behaviours and practices (e.g. Brooks-Gunn and Duncan 1997).
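To illustrate this specification, the sketch below (not the authors' code) estimates a pooled probit of HE participation on cohort dummies, family background and school type for the restricted sample with five or more A*-C GCSEs, and reports marginal effects rather than raw coefficients. The data frame ycs and all column names are hypothetical placeholders for the pooled YCS cohorts.

import pandas as pd
import statsmodels.formula.api as smf

def he_participation_probit(ycs: pd.DataFrame):
    # restrict to pupils with the minimum prior attainment, as in the main specification
    sample = ycs[ycs["five_plus_ac_gcse"] == 1]
    # pooled probit: HE participation on cohort dummies, background and school type
    model = smf.probit(
        "in_he ~ C(cohort) + male + C(parent_ses) + C(ethnicity)"
        " + C(father_educ) + C(mother_educ) + C(school_type)",
        data=sample,
    ).fit(disp=0)
    # average marginal effects (the quantities reported in the paper's Table 3)
    return model.get_margeff(at="overall").summary()

# hypothetical usage once the pooled YCS cohorts are loaded as a DataFrame:
# print(he_participation_probit(ycs))

Interactions between the cohort dummies and the background variables can be added to the formula (e.g. C(cohort):C(parent_ses)) to test whether the marginal associations change over time, in the spirit of the extensions described below.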
Our basic model therefore measures the relationship between a range of family background measures and HE participation, conditional on a minimum level of GCSE achievement. However, as has been said, we cannot assume that the relationships we observe are necessarily causal. Some of our key explanatory variables, such as parents’ socio-economic background, are potentially endogenous. Unmeasured parental characteristics and attitudes towards schooling may well influence both the child’s educational achievement and the parent’s socio-economic status. Furthermore, we do not have any measures of the individual’s inherent ability in the data set. If pupils’ ability is correlated with their socio-economic background, some of the apparent positive impact of family background on HE participation is actually attributable to such pupils being of higher ability.20 We cannot completely overcome these problems in our data. Although we do have measures of educational achievement at age 16, these are also potentially endogenous. We did explore the data extensively to try to find potential instruments for these age 16 measures of achievement; however, we were unable to find any suitable variables in the data set. Instead, we use the age 16 measures of educational achievement to observe whether family background largely works through decisions made at or prior to age 18. We are already testing this to some extent by focusing on a sample of students who achieved a minimum of five good GCSEs at age 16. We test this argument further, however, by including other measures of prior attainment (A levels attained and GCSE grades), to determine whether the marginal association between family background and HE participation becomes less important once these prior attainment variables are included. Of course students who anticipate that they will go on to university may work harder at GCSE and A level. That is why these measures are potentially endogenous. However, the data can at least tell us whether family background acts on decisions and achievement at age 16 and 18, rather than at the point of entry into HE. We allow for mean changes in HE participation over time with our cohort dummy variables, with the base case being cohort 6 (potentially entering HE in 1994). We then test for changes in the marginal associations between the explanatory variables and the HE participation decision by including interactions between the cohort dummy variables and other explanatory variables, as described in more detail below.

4 Results

4.1 Descriptive statistics

Descriptive statistics for the full YCS sample are shown in Table 2.
20 The model may also suffer from other sources of omitted variables bias. For example, it does not include parental income measures as these are not available in the data.
Table 2 Descriptive statistics for YCS sample
For each cohort, the three columns give proportions among those participating in HE at age 18 (HE), those not participating (Not) and the whole cohort sample (Total).

                                        Cohort 6 (1994)       Cohort 7 (1996)       Cohort 8 (1998)       Cohort 9 (2000)
                                        HE    Not   Total     HE    Not   Total     HE    Not   Total     HE    Not   Total
Male                                    0.42  0.44  0.43      0.41  0.43  0.42      0.43  0.43  0.43      0.39  0.42  0.41
Parents’ socio-economic status
  Professional, managerial & technical  0.36  0.20  0.24      0.29  0.20  0.23      0.39  0.23  0.29      0.38  0.22  0.28
  Other non-manual occupations          0.26  0.18  0.21      0.21  0.18  0.19      0.28  0.20  0.23      0.27  0.21  0.23
  Skilled occupations – manual*         0.27  0.38  0.34      0.32  0.34  0.33      0.19  0.32  0.27      0.23  0.34  0.29
  Semi-skilled occupations – manual     0.06  0.12  0.10      0.08  0.12  0.11      0.07  0.12  0.10      0.07  0.11  0.10
  Unskilled occupations                 0.01  0.03  0.03      0.02  0.03  0.03      0.02  0.03  0.03      0.01  0.04  0.03
  Other                                 0.04  0.09  0.08      0.08  0.13  0.11      0.05  0.10  0.08      0.04  0.08  0.07
Ethnicity
  White*                                0.93  0.93  0.94      0.92  0.92  0.92      0.91  0.93  0.92      0.90  0.90  0.91
  Black                                 0.01  0.02  0.01      0.01  0.01  0.01      0.01  0.01  0.01      0.01  0.02  0.01
  Asian                                 0.06  0.05  0.05      0.07  0.06  0.06      0.07  0.05  0.06      0.08  0.06  0.07
  Other                                 0.00  0.00  0.00      0.01  0.01  0.01      0.01  0.01  0.01      0.01  0.02  0.01
Parental education
  Father degree                         0.29  0.12  0.17      0.27  0.11  0.16      0.31  0.15  0.20      0.32  0.15  0.21
  Father at least one A level           0.10  0.08  0.08      0.12  0.07  0.09      0.12  0.09  0.10      0.12  0.09  0.10
  Father below one A level*             0.61  0.80  0.75      0.61  0.82  0.75      0.57  0.76  0.69      0.56  0.76  0.69
  Mother degree                         0.17  0.07  0.10      0.18  0.06  0.10      0.21  0.10  0.14      0.23  0.12  0.16
  Mother at least one A level           0.15  0.09  0.11      0.17  0.11  0.12      0.18  0.11  0.14      0.16  0.11  0.13
  Mother below one A level*             0.68  0.84  0.79      0.65  0.83  0.78      0.61  0.78  0.72      0.61  0.77  0.71
Type of school attended
  Comprehensive age 16                  0.21  0.29  0.27      0.24  0.35  0.32      0.28  0.40  0.36      0.24  0.33  0.30
  Comprehensive age 18*                 0.54  0.60  0.58      0.49  0.53  0.52      0.42  0.45  0.44      0.51  0.53  0.52
  Grammar                               0.10  0.03  0.05      0.08  0.03  0.04      0.09  0.04  0.05      0.09  0.04  0.06
  Secondary modern                      0.01  0.04  0.03      0.02  0.04  0.03      0.02  0.04  0.03      0.01  0.04  0.03
  Independent                           0.14  0.04  0.07      0.17  0.05  0.09      0.20  0.07  0.12      0.15  0.06  0.09
Highest school qualification
  One or two A levels                   0.14  0.14  0.14      0.12  0.11  0.11      0.12  0.13  0.13      0.11  0.13  0.12
  Three or more A levels                0.67  0.10  0.26      0.55  0.11  0.24      0.73  0.16  0.35      0.72  0.19  0.38
  Five or more A–C GCSEs                0.97  0.41  0.57      0.96  0.47  0.61      0.97  0.55  0.69      0.98  0.57  0.71
Observations                            2,730 6,696 9,426     2,343 5,642 7,985     3,415 6,496 9,911     2,186 4,082 6,268
∗ Base case in subsequent regressions
Of those aged 18 in 1994, 29% were in HE. The third column gives the total proportion of the 1994 cohort with each characteristic. The subsequent sets of columns provide the same information for the cohorts aged 18 in 1996, 1998 and 2000. Even over this relatively short period of time, there appear to have been some changes in the characteristics of those participating in HE. Of those participating in HE in 1994, 36% were from a professional, managerial or technical background. By 2000, this had risen to 38%. Similar trends can be observed when we consider parental education. Twenty-nine percent of those participating in HE in 1994 had a father with a degree: by 2000 this had risen to 32%, and a similar trend is observed for the proportion of students whose mother had a degree. Some of this change is due to a change in the overall characteristics of the sample (e.g. the proportion of the total sample with a degree-educated father rose from 17 to 21% over the period). This was also a period during which GCSE and A level achievement was rising substantially. Whilst 67% of those participating in HE in 1994 had three or more A levels, this proportion had risen to 72% for those in HE in 2000. This trend reflects both rising A level achievement across the board, and potentially a change in the composition of the HE student body.
4.2 Regression results
Table 3 gives the marginal effects from a probit model, where the dependent variable takes a value of one if the person was in HE at age 18 and zero if they were not.21 Individuals not participating in HE could be in various states, either in or out of the labour market, or studying for lower level qualifications. Data from the four cohorts are pooled. The sample for our main table, as discussed above, is restricted to those who we believe are potentially able and qualified to go on to HE, i.e., those with five or more Grade A*–C GCSEs. In addition to addressing some of the attrition issues discussed in the previous section, restricting our sample to those with five or more good GCSEs would appear to be appropriate since individuals increasingly enter HE without A levels, and hence limiting the sample only to those with A levels would be too restrictive. However, our results remain qualitatively similar even when the comparator group is all individuals with one or more A levels. In specifications 1–3 in Table 3, dummy variables are included which indicate which cohort the individual is from, with the base case being cohort 6 (age 18 in 1994). The cohort dummy variables measure the average difference in HE participation between earlier and later cohorts, conditional on other personal characteristics of the students. Our other explanatory variables, as discussed earlier, are the student’s gender, socio-economic background, ethnicity, parental education and school type.
21 Thus, we are measuring HE participation, not degree attainment. If drop-out and degree failure vary by social class, we may be understating the extent of the relationship between social class and HE achievement.
Table 3 The determinants of HE participation (marginal effects), sample with 5+ good GCSEs: dependent variable degree versus other activities
A dash (–) indicates that the variable is not included in that specification.

                                        (1)             (2)             (3)             (4)             (5)             (6)
Sex (male = 1)                          0.018 (2.61)    0.039 (5.29)    0.036 (4.74)    0.018 (2.52)    0.039 (5.22)    0.036 (4.71)
SEG: base case skilled manual
SEG professional/managerial             0.058 (6.07)    0.030 (3.01)    0.011 (1.04)    0.057 (3.19)    0.024 (1.30)    0.002 (0.11)
SEG other non-manual                    0.060 (6.13)    0.034 (3.31)    0.017 (1.59)    0.043 (2.25)    0.022 (1.11)    0.007 (0.34)
SEG semi-skilled manual                 −0.009 (0.64)   0.001 (0.04)    −0.015 (0.99)   −0.057 (2.00)   −0.020 (0.67)   −0.026 (0.85)
SEG unskilled                           −0.014 (0.52)   0.005 (0.19)    0.019 (0.66)    −0.063 (1.07)   −0.058 (0.93)   −0.026 (0.40)
Miscellaneous                           −0.028 (1.76)   −0.001 (0.08)   −0.005 (0.30)   −0.070 (2.08)   −0.027 (0.77)   −0.013 (0.34)
Cohort dummies: base case cohort 6
Cohort 7: age 18 in 1996                −0.024 (2.38)   −0.026 (2.53)   −0.000 (0.00)   −0.016 (0.86)   −0.017 (0.91)   0.002 (0.08)
Cohort 8: age 18 in 1998                −0.028 (2.94)   −0.032 (3.24)   −0.044 (4.30)   −0.059 (3.27)   −0.063 (3.35)   −0.060 (3.05)
Cohort 9: age 18 in 2000                −0.022 (2.15)   −0.029 (2.70)   −0.051 (4.63)   −0.048 (2.46)   −0.043 (2.13)   −0.067 (3.24)
Ethnicity: base case white
Black                                   0.051 (1.34)    0.093 (2.35)    0.091 (2.23)    0.050 (1.32)    0.092 (2.32)    0.090 (2.21)
Asian                                   0.167 (10.54)   0.191 (11.67)   0.193 (11.47)   0.167 (10.59)   0.191 (11.69)   0.193 (11.48)
Other                                   −0.046 (1.27)   −0.042 (1.12)   −0.036 (0.95)   −0.046 (1.28)   −0.042 (1.14)   −0.038 (0.98)
Ethnicity missing                       0.019 (0.27)    −0.010 (0.14)   0.015 (0.20)    0.012 (0.18)    −0.012 (0.17)   0.017 (0.22)
Parental education: base case father/mother with less than A levels
Father degree                           0.094 (9.23)    0.044 (4.21)    0.033 (3.01)    0.091 (8.92)    0.042 (4.01)    0.032 (2.94)
Father A level                          0.037 (3.03)    0.015 (1.20)    0.013 (1.01)    0.035 (2.82)    0.014 (1.07)    0.012 (0.94)
Father education missing                −0.003 (0.21)   0.003 (0.24)    0.011 (0.75)    −0.005 (0.37)   0.002 (0.16)    0.012 (0.80)
Mother degree                           0.063 (5.66)    0.016 (1.42)    −0.003 (0.24)   0.061 (5.56)    0.016 (1.37)    −0.003 (0.28)
Mother A level                          0.049 (4.57)    0.019 (1.69)    0.009 (0.81)    0.049 (4.55)    0.019 (1.70)    0.009 (0.82)
Mother education missing                −0.060 (4.32)   −0.051 (3.57)   −0.043 (2.91)   −0.061 (4.39)   −0.052 (3.61)   −0.043 (2.93)
Type of school attended: base case comprehensive to age 18
Age 16 comprehensive                    −0.039 (4.70)   −0.037 (4.37)   −0.032 (3.62)   −0.039 (4.71)   −0.037 (4.38)   −0.032 (3.62)
Grammar school                          0.090 (6.69)    −0.012 (0.85)   −0.029 (2.02)   0.091 (6.73)    −0.011 (0.82)   −0.029 (2.01)
Secondary modern                        −0.127 (5.15)   −0.069 (2.68)   −0.034 (1.27)   −0.127 (5.14)   −0.069 (2.67)   −0.033 (1.25)
Independent                             0.089 (8.07)    0.001 (0.05)    0.012 (1.02)    0.089 (8.11)    0.001 (0.05)    0.012 (1.05)
School type missing                     0.270 (1.76)    0.181 (1.13)    0.269 (1.68)    0.271 (1.77)    0.184 (1.15)    0.271 (1.69)
GCSE grades
GCSE maths grade                        –               0.119 (28.29)   0.082 (18.67)   –               0.119 (28.26)   0.082 (18.64)
GCSE maths grade missing flag           –               0.279 (8.87)    0.186 (5.45)    –               0.281 (8.91)    0.186 (5.44)
GCSE English grade                      –               0.094 (19.36)   0.048 (9.48)    –               0.094 (19.30)   0.048 (9.47)
GCSE English grade missing flag         –               0.008 (0.19)    −0.075 (1.71)   –               0.007 (0.17)    −0.076 (1.73)
Highest school qualifications
One or two A levels                     –               –               0.072 (6.72)    –               –               0.072 (6.70)
Three or more A levels                  –               –               0.361 (41.73)   –               –               0.361 (41.65)
Interaction SEG with cohort 7
Professional × cohort 7                 –               –               –               −0.040 (1.53)   −0.024 (0.89)   −0.001 (0.03)
Non-manual × cohort 7                   –               –               –               −0.023 (0.83)   −0.022 (0.76)   −0.011 (0.36)
Semi-skilled × cohort 7                 –               –               –               0.050 (1.27)    0.016 (0.39)    0.013 (0.30)
Unskilled × cohort 7                    –               –               –               0.109 (1.39)    0.107 (1.29)    0.081 (0.93)
Miscellaneous × cohort 7                –               –               –               0.031 (0.72)    −0.007 (0.15)   −0.024 (0.51)
Interaction SEG with cohort 8
Professional × cohort 8                 –               –               –               0.019 (0.80)    0.030 (1.20)    0.012 (0.47)
Non-manual × cohort 8                   –               –               –               0.061 (2.36)    0.051 (1.91)    0.031 (1.12)
Semi-skilled × cohort 8                 –               –               –               0.075 (1.98)    0.041 (1.03)    0.018 (0.44)
Unskilled × cohort 8                    –               –               –               0.040 (0.54)    0.077 (0.99)    0.052 (0.64)
Miscellaneous × cohort 8                –               –               –               0.051 (1.16)    0.063 (1.37)    0.036 (0.76)
Interaction SEG with cohort 9
Professional × cohort 9                 –               –               –               0.028 (1.07)    0.018 (0.67)    0.030 (1.05)
Non-manual × cohort 9                   –               –               –               0.025 (0.87)    0.012 (0.40)    0.015 (0.50)
Semi-skilled × cohort 9                 –               –               –               0.065 (1.54)    0.025 (0.56)    0.015 (0.34)
Unskilled × cohort 9                    –               –               –               0.053 (0.63)    0.059 (0.66)    0.033 (0.36)
Miscellaneous × cohort 9                –               –               –               0.107 (2.05)    0.060 (1.11)    0.028 (0.50)
Observations                            21,600          21,600          21,600          21,600          21,600          21,600
Sample restricted to students from YCS cohorts 6, 7, 8 and 9 who attained at least 5 GCSE grades A–C by age 18. Base case is an individual with skilled manual background, white, parental education below A level, who attended a comprehensive school that accommodated students to age 18 and from cohort 6, i.e., age 18 in 1994. Estimated by probit.
Specification 1 in Table 3 does not control for age 16/18 achievement (beyond the restriction of the sample to those with five good GCSEs). This model therefore measures the marginal association between these family background and school measures and HE participation, conditional on a minimum level of GCSE attainment. In specification 2 we add in achievement at age 16, measured by GCSE grades in English and mathematics.22 In specification 3 we add in achievement at age 18, as measured by the number of A levels the student attained. Despite the potential endogeneity of these achievement measures (as discussed above), we can still use these specifications to identify whether there remains any marginal association between our explanatory variables and HE participation, once account has been taken of age 16 and age 18 achievement levels. In columns 4–6, we then include the interactions between the cohort dummy variables and the variables measuring the individual’s socio-economic background.23 These interaction terms enable us to test the hypothesis that the marginal association between socio-economic background and HE participation has been changing over the period. We are most interested in the gap in HE participation between students from higher and lower socio-economic backgrounds, and we want to determine whether these gaps have changed across the different cohorts. The coefficients on the cohort × socio-economic group interaction terms indicate whether the particular group in question has an increased or decreased probability of HE participation, over and above the average gap in HE participation for that socio-economic group across all cohorts. Turning to our findings, we start by discussing the average relationship between socio-economic background and HE participation across all cohorts. In specification 1, which does not control for cohort/socio-economic group interactions, nor for GCSE grades or A level attainment, we find a strong and significant relationship between socio-economic background and HE participation. Young people from a professional, managerial or non-manual family background had a 6 percentage point higher probability of HE participation than the base case of a student from a skilled manual background. The other socio-economic group variables are insignificant. Recall that this is a marginal association, controlling for other factors that are also influenced by socio-economic background, such as school type. Specifications 2 and 3 in Table 3 then add controls for achievement at both GCSE (specification 2) and A level (specification 3). Inclusion of GCSE achievement reduces but does not eliminate the significant association between socio-economic background and HE participation. However, once one includes a measure of educational achievement at A level, we find that the socio-economic group variables become insignificant.
22 Coded as Grade A – five points, Grade B – four points, Grade C – three points, Grade D – two points, Grade E – one point. Otherwise zero. 23 We tested for full interactions between all the explanatory variables and the cohort dummies but as the other interactions are not significant they were not included.
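To make the estimation approach concrete, the following sketch shows how marginal effects from a probit of HE participation on family background and prior attainment could be computed. It is a minimal illustration only, assuming a hypothetical pooled YCS extract with column names of our own choosing (in_he_age18, seg_professional, gcse_maths_grade, and so on); it is not the authors’ code, and the full specifications also include ethnicity, parental education, school type, missing-data flags and, in specifications 4–6, cohort × socio-economic group interactions.

# Minimal sketch of a probit with marginal effects, in the spirit of Table 3.
# The data frame `df` and all variable names are illustrative assumptions,
# not the actual YCS field names.
import pandas as pd
import statsmodels.api as sm

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # footnote 22 coding; anything else scores 0

def fit_he_probit(df: pd.DataFrame):
    df = df.copy()
    # Convert GCSE letter grades to the points scale used in specifications 2 and 3.
    df["gcse_maths_points"] = df["gcse_maths_grade"].map(GRADE_POINTS).fillna(0)
    df["gcse_english_points"] = df["gcse_english_grade"].map(GRADE_POINTS).fillna(0)

    # Restrict to pupils with five or more grade A*-C GCSEs, as in the main table.
    sample = df[df["five_good_gcses"] == 1]

    y = sample["in_he_age18"]                      # 1 if in HE at age 18, 0 otherwise
    X = sm.add_constant(sample[[
        "male",
        "seg_professional", "seg_other_nonmanual",  # base case: skilled manual
        "seg_semiskilled", "seg_unskilled", "seg_other",
        "cohort7", "cohort8", "cohort9",            # base case: cohort 6 (1994)
        "gcse_maths_points", "gcse_english_points",
    ]])

    res = sm.Probit(y, X).fit(disp=0)
    # Average marginal effects, comparable to the figures reported in Table 3.
    return res, res.get_margeff(at="overall")

# Usage: res, margeff = fit_he_probit(df); print(margeff.summary())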
Specifications 1–3 constrain the coefficients on the socio-economic group variables to be the same across all the cohorts and only control for a mean cohort effect. The mean cohort effect in specification 1 suggests that individuals from the later cohorts were between 2 and 3 percentage points less likely to participate in HE, as compared to those from the 1994 cohort. However, when controls for age 16/18 achievement are included in the model (specifications 2 and 3), the mean cohort effect suggests that pupils from the post-fee cohorts (1998 and 2000) are nearly 5 percentage points less likely to go to university than pupils from the earlier cohorts (1994 and 1996). In other words, in a model which accounts for changes in GCSE and A level achievement, later cohorts are significantly less likely to go to university. In summary, the first three specifications from Table 3 suggest that, for a given level of achievement at 16 and 18, there is no significant marginal relationship between socio-economic background and HE participation. As we discuss in detail in our conclusions, this is an important finding from a policy perspective, implying that the issue of educational inequality in HE participation is rooted in the decisions made and achievement of pupils in secondary school, rather than at the point of entry into HE.
4.3 Cohort interactions
As has been said, the first three specifications in Table 3 constrain the impact of the family background variables to be the same across all the cohorts. In specifications 4–6 we check this assumption by including a full set of cohort × socio-economic background interactions. In specification 4, which does not control for GCSE grades or A level attainment, we continue to find significant average relationships between socio-economic background and the likelihood of HE participation. Among students who achieved the minimum of five good GCSEs, those from professional backgrounds were 6 percentage points more likely to subsequently participate in HE. Similarly, those from non-manual backgrounds were 4 percentage points more likely to go to university. Once one controls for cohort × socio-economic group interactions, individuals from semi-skilled backgrounds were nearly 6 percentage points less likely to go to university than those from skilled backgrounds, whilst those from unclassified socio-economic backgrounds were 7 percentage points less likely to go to university. The cohort dummy variables in specification 4 suggest that individuals from the 1998 and 2000 cohorts are between 5 and 6 percentage points less likely to go to university. Once again, after controlling for personal characteristics, later cohorts are significantly less likely to go to university. The cohort × socio-economic group interaction terms in specification 4 are largely insignificant. There is evidence that pupils from cohort 8 (age 18 in 1998) with a non-manual background were 6 percentage points more likely to go to university, as compared to those from a skilled background who turned eighteen in 1994. If we focus on the HE participation gap between pupils from
non-manual and skilled backgrounds, we find this gap to be 10.4 percentage points in 1998,24 compared to just 4.3 percentage points in 1994. This pattern is not significant for the 2000 cohort however. Equally, we find that pupils from a semi-skilled background were actually 7.5 percentage points more likely to participate in HE in 1998 (cohort 8). This means that in that cohort pupils from a semi-skilled background were actually 1.8 percentage points more likely to participate in HE than pupils from a skilled background.25 This is somewhat counterintuitive, implying individuals from a lower socio-economic background in 1998 were actually more likely to go to university than those from a skilled manual background. By and large, however, there is little evidence of a widening socio-economic gap in HE participation over the full period. Specification 5 then controls for GCSE achievement, whilst specification 6 controls for A level achievement. Controlling for age 16/18 achievement removes any mean association between socio-economic background and HE participation. Controlling for age 16/18 achievement does not, however, have much impact on the mean cohort effects. The results still suggest that pupils from the 1998 and 2000 cohorts are significantly less likely to go to university than similarly qualified students from earlier cohorts. The socio-economic group × cohort interaction terms remain insignificant, indicating no change in the relationship between socio-economic background and HE participation, at least not once one controls for prior achievement. In summary, there is a significant relationship between socio-economic background and HE participation when one controls only for a minimal level of GCSE achievement. In the 1990s, students from more advantaged backgrounds who got five good GCSEs were significantly more likely to go on to university than similarly qualified students from lower socio-economic backgrounds. However, once one accounts for achievement at GCSE and particularly at A level, the relationship between socio-economic background and HE participation becomes insignificant. Of course, socio-economic background impacts on GCSE and A level achievement too. It is certainly the case that socio-economic background affects students’ educational attainment; however, we do not observe a significant relationship between socio-economic background and educational achievement in the A level to HE phase.26 Secondly, we find that individuals from the 1998 and 2000 (post-fee) cohorts are significantly less likely to go to university for a given set of personal characteristics.
24 One must add the coefficient on the non-manual socio-economic group dummy to the coefficient on the non-manual × cohort 8 interaction term (4.3 + 6.1 = 10.4).
25 One must add the coefficient on the semi-skilled manual socio-economic group dummy to the coefficient on the semi-skilled × cohort 8 interaction term (−5.7 + 7.5 = 1.8).
26 Appendix A Table 6 shows the raw relationship between socio-economic background and the likelihood of attending HE, for the full YCS sample. Specifications 1 and 4 have no controls for educational attainment at age 16 or 18 and show strongly significant effects from socio-economic background on HE participation. The table based on the full sample also confirms that once one allows for achievement at age 16/18, there is no significant relationship between socio-economic background and HE participation. In general terms the appendix shows that most of the relationships observed in the restricted sample also hold in the full sample.
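The cohort-specific participation gaps reported in footnotes 24 and 25 are obtained by adding the relevant specification 4 main effect to the corresponding cohort interaction term. In our own shorthand (β for the main marginal effects and δ for the cohort interactions, which is not notation used by the authors), the two calculations are:

\begin{align*}
\text{non-manual vs. skilled, cohort 8:}\quad & \beta_{\text{non-manual}} + \delta_{\text{non-manual}\times\text{cohort 8}} = 0.043 + 0.061 = 0.104,\\
\text{semi-skilled vs. skilled, cohort 8:}\quad & \beta_{\text{semi-skilled}} + \delta_{\text{semi-skilled}\times\text{cohort 8}} = -0.057 + 0.075 = 0.018.
\end{align*}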
However, aggregate HE participation did rise somewhat during this period and, as Appendix A Table 6 shows (specification 1), pupils from the later cohorts are indeed significantly more likely to go to university when one does not control for GCSE or A level achievement at all. This apparent contradiction is explained by the fact that GCSE and A level achievement was also rising, so on average potential HE entrants were becoming better qualified. The model actually tests whether, for a given level of achievement at age 16 and 18, individuals from later cohorts were more or less likely to go to university. The results suggest that students who achieved a similar level of achievement at age 16/18 were actually less likely to go to university in the later cohorts. This is consistent with the continuing increase in HE participation during this period being driven by pupils achieving more at GCSE and A level and hence being more likely to go to university, rather than any tendency for the HE participation rate to rise for a given level of attainment. In other words, this provides some evidence to counter the ‘dumbing down’ story, namely that the rise in HE participation has been driven by falling entry standards. It does not provide evidence of any changes in academic standards at GCSE or A level. Furthermore, as HE expanded during the 1980s and early 1990s, individuals increasingly entered HE without A levels. This in itself may provide some support for the ‘dumbing down’ hypothesis in the earlier period. Thirdly, we find weak evidence of a change in the relationship between socio-economic background and HE participation, but only for certain socio-economic groups in the 1998 cohort. Clearly the fact that we only find evidence of change for one cohort indicates that such analyses need to be conducted over a reasonably long time period (as is the case here) to identify trends. In any case, these interaction effects disappear once one allows for age 16/18 achievement.
4.4 Other family background variables
We now turn to the other family background and school variables. For the sample of higher achievers in Table 3, i.e., those with five or more good GCSEs, males are more likely to participate in HE. In the specifications that control for age 16/18 achievement, this effect is larger. This confirms that for a given level of achievement at age 16/18, boys are still more likely to go to university. The coefficients on the ethnicity variables are generally insignificant.27 However, we do observe that Asian students are significantly more likely (by 17–19 percentage points) to go to university. This result is extremely robust to specification and holds in the full sample (Appendix A: Table 6).28 There is a large literature on the impact of ethnicity on educational attainment (see Bhattacharyya et al. 2003 for a summary). Our results are broadly consistent with the findings from this literature.
27 Ideally we would have liked finer distinctions between different ethnic groups, rather than the somewhat crude indicators that we use. However, this was not possible with these data.
28 We also included ethnicity × cohort interactions but they were insignificant.
Namely, whilst ethnicity has a significant impact on educational achievement, much of this is due to the role of socio-economic factors. In models that do not control for socio-economic background, ethnicity plays a significant role in determining educational achievement. In our model, which includes socio-economic group controls, ethnicity only has a significant (positive) effect for students classified as Asian. Parental education, like socio-economic background, is significantly associated with HE participation. In specification 1, individuals whose mother or father has a degree are 6–9 percentage points more likely to go to HE. Those with a mother or father with at least one A level are 4–5 percentage points more likely to participate in HE.29 Yet once one controls for attainment at age 16 and 18, the effect lessens and becomes insignificant in specification 3, except for pupils who have a degree-educated father, who are still 3 percentage points more likely to go to university ceteris paribus. The main result is that parental education matters, but again the marginal association between parental education and pupils’ own educational achievement is largely insignificant in the A level–HE phase.30 We also included school type, to proxy for an individual’s prior school quality and to some extent pupil ability (more able pupils attend grammars). The base case for Table 3 is an individual who attended a comprehensive that took students up to the age of 18. We then examine the relationship between school type and HE participation, where the different school types are: comprehensives that only took students up to the age of 16, grammars, secondary moderns and independent schools. In specification 1, it is clear that students who attended a comprehensive that only went to the age of 16 are significantly less likely to go to university (by around 4 percentage points). Students who were in grammar schools and independent schools are significantly more likely to go to university (by 9 percentage points). Students from secondary moderns are very much less likely (by 13 percentage points) to go to university. This specification does not, however, control for achievement at age 16–18, which is highly correlated with school type. In specification 3, after controlling for achievement at both GCSE and A level, an individual who attended an age 16 comprehensive school is still 3 percentage points less likely to go to university. This might suggest that curriculum options, career options, career advice or expectations are different for students attending these schools, even for a given level of pupil ability. Interestingly, grammar students are actually 3 percentage points less likely to go to university, for a given level of GCSE and A level attainment.31 Of course, our variables measuring achievement at age 16 and 18, namely the number of A levels and GCSE maths and English grades, are highly significant. Higher achieving students are more likely to go to university, for a given set of family background characteristics.
29 Unsurprisingly, the relationship between parental education and HE participation is much stronger in the full sample when no account is taken of GCSE or A level achievement (Appendix A: Table 6, specification 1).
30 We also included parental education × cohort interactions but they were insignificant.
31 We also included school type × cohort interactions but they were insignificant.
Lastly, we note that our results are robust to the inclusion of regional fixed effects. Our sample size is not sufficient to estimate the model by region but we are aware of recent evidence of regional differences in the determinants of education participation (Rice 2004).
5 Conclusions
There is a highly significant relationship between a pupil’s socio-economic background and the likelihood that he or she participates in HE. However, we found no evidence of any marginal effect from socio-economic background for a given level of age 16/18 achievement. Thus there is certainly socio-economic inequality in HE, but this phenomenon is largely a result of inequalities and decisions made earlier in the education system, i.e., before the age of 16/18. Specifically, in models that include finer measures of educational achievement at ages 16 and 18, the relationship between socio-economic background (and indeed parental education) and HE participation becomes statistically insignificant. If policymakers wish to reduce socio-economic inequalities in HE, they need to focus first on the problems of educational inequality that emerge in the compulsory schooling phase. This evidence is consistent with other analyses, which suggest that education inequalities emerge early. For example, Bradley and Taylor (2004) found that ethnic differences in educational attainment at age 16 were also largely determined by prior attainment. Of course, the fact that we observe significant inequalities in educational attainment at earlier ages does not mean that this inequality is unrelated to problems in HE. Students may look forward and anticipate barriers to participation in HE and make less effort in school as a result. Indeed there are many such potential barriers, not least of which is the expected cost of HE and the role of student expectations (see for example Connor et al. 2001; Jackson et al. 2004). Thus poorer students may put in less effort at school, particularly at GCSE, simply because they do not anticipate being able to access HE anyway. The role played by students’ perceptions about the barriers they face in HE is an area that requires further research. Nonetheless, given that both GCSEs and A levels earn a significant return in the labour market in and of themselves, it is unlikely that students’ expectations about not being able to go to university completely explain their achievement at age 16 or 18. It is more likely that students who experience a lower quality of education up to the age of 16 are less likely to go to university simply because they lack the necessary educational grounding and qualifications to do so. This paper also examines the changing role of family background during the period. Although this is an important period, which saw the introduction of tuition fees, the numerous other policy changes occurring in the HE sector at that time mean that we cannot evaluate the impact of tuition fees per se. Instead we investigated changes in the socio-economic characteristics of those who went to university during this period. We found weak evidence of a change
in the relationship between socio-economic background and HE participation for the 1998 cohort. However, over the full period of 1994–2000 there is no consistent evidence of a widening in the HE participation gap between higher and lower socio-economic groups. We therefore conclude that for students who achieve a minimum level at GCSE and A level, there is no evidence of a significant strengthening of the relationship between family background and HE participation during this time. We did find some evidence that the expansion of HE in the latter half of the 1990s was largely driven by increases in age 16 and age 18 achievement, rather than an opening up of HE to those who lacked these qualifications. GCSE and A level achievement was rising during this period, so on average students were becoming better qualified. In fact we found that students who achieved a similar level of achievement at age 16/18 were actually less likely to go to university in the later cohorts. This suggests that the continuing increase in HE participation during this period was driven by pupils achieving more at GCSE and A level, and hence being more likely to go to university, rather than any tendency to ‘dumb down’ standards and admit less qualified students. Of course it may be that standards at 16 and 18 changed during this period and, as discussed above, that any ‘dumbing down’ occurred earlier, during the massive expansion of HE in the early 1990s. Lastly, we found a significant relationship between the type of secondary school a pupil attended and their likelihood of going to university, even allowing for the full range of personal characteristics, socio-economic background and academic achievement at age 16/18. Specifically, if a pupil attended a comprehensive without a sixth form, he or she was 3 percentage points less likely to go to university. This might suggest that curriculum options or expectations are different for students attending these schools, and is an issue that merits further research. Interestingly, grammar students are actually less likely to go to university, for a given level of GCSE and A level attainment. We have no hard evidence to explain this result, although it may be that grammar school students have greater outside opportunities in the labour market that encourage them to leave full-time education. It may also be that grammar school students are more likely to take time out (gap years and the like) before going on to university and that we are unable to observe this in our data.
Acknowledgements We are grateful for very helpful comments from two anonymous referees, as well as participants at the CEE seminar on this issue.
Appendix A
Table 4 Response rates for Youth Cohort Study surveys
Cohort              6 (1994)   7 (1996)   8 (1998)   9 (2000)
Response rate (%)   69         66         65         65
Table 5 Characteristics of sample for sweeps 1 and 2 for cohort 7
Both columns are restricted to those with 5+ good GCSEs.

                                                    Cohort 7 sweep 1   Cohort 7 sweep 2
Male                                                0.43               0.40
Parents’ socio-economic status
  Professional, managerial and technical occupation 0.27               0.27
  Other non-manual occupations                      0.20               0.20
  Skilled occupations – manual                      0.32               0.33
  Semi-skilled occupations – manual                 0.09               0.09
  Unskilled occupations                             0.02               0.02
  Other                                             0.10               0.09
Ethnicity
  White                                             0.92               0.93
  Black                                             0.01               0.01
  Asian                                             0.06               0.05
  Other                                             0.01               0.01
Parental education
  Father degree                                     0.22               0.23
  Father at least one A level                       0.10               0.10
  Father below one A level                          0.68               0.67
  Mother degree                                     0.14               0.14
  Mother at least one A level                       0.15               0.16
  Mother below one A level                          0.71               0.70
Highest school qualification
  Five or more A-C GCSEs                            1                  1
Observations                                        9,319              4,883
∗ Base case in regressions
Table 6 The determinants of HE participation (marginal effects), unrestricted sample: dependent variable degree versus other activities
A dash (–) indicates that the variable is not included in that specification.

                                        (1)             (2)             (3)             (4)             (5)             (6)
Sex (male = 1)                          −0.005 (1.01)   0.036 (6.66)    0.033 (6.00)    −0.006 (1.16)   0.035 (6.58)    0.033 (5.96)
SEG: base case skilled manual
SEG professional/managerial             0.099 (13.20)   0.030 (4.06)    0.012 (1.66)    0.140 (10.26)   0.041 (3.09)    0.022 (1.60)
SEG other non-manual                    0.089 (11.67)   0.031 (4.15)    0.016 (2.08)    0.111 (7.80)    0.036 (2.55)    0.022 (1.54)
SEG semi-skilled manual                 −0.029 (3.00)   −0.004 (0.40)   −0.013 (1.27)   −0.050 (2.72)   −0.005 (0.25)   −0.010 (0.47)
SEG unskilled                           −0.063 (3.74)   −0.016 (0.87)   −0.008 (0.44)   −0.109 (3.15)   −0.069 (1.79)   −0.053 (1.31)
Miscellaneous                           −0.061 (5.87)   −0.010 (0.92)   −0.012 (1.07)   −0.090 (4.41)   −0.017 (0.75)   −0.008 (0.36)
Cohort dummies: base case cohort 6
Cohort 7: age 18 in 1996                0.008 (1.08)    −0.008 (1.09)   0.014 (1.86)    0.045 (3.42)    0.013 (0.97)    0.029 (2.09)
Cohort 8: age 18 in 1998                0.015 (2.10)    −0.012 (1.76)   −0.019 (2.63)   0.008 (0.61)    −0.027 (2.06)   −0.022 (1.61)
Cohort 9: age 18 in 2000                0.038 (4.77)    0.001 (0.14)    −0.017 (2.18)   0.052 (3.58)    0.008 (0.59)    −0.012 (0.86)
Ethnicity: base case white
Black                                   −0.030 (1.24)   0.043 (1.62)    0.044 (1.60)    −0.031 (1.27)   0.042 (1.59)    0.043 (1.57)
Asian                                   0.129 (10.89)   0.177 (13.93)   0.176 (13.52)   0.130 (10.97)   0.177 (13.92)   0.175 (13.50)
Other                                   −0.014 (0.51)   0.000 (0.01)    0.002 (0.07)    −0.013 (0.47)   0.000 (0.01)    0.001 (0.05)
Ethnicity missing                       −0.040 (0.85)   −0.036 (0.73)   −0.023 (0.45)   −0.049 (1.05)   −0.037 (0.75)   −0.021 (0.42)
Parental education: base case father/mother with less than A levels
Father degree                           0.109 (13.18)   0.029 (3.64)    0.020 (2.45)    0.106 (12.70)   0.027 (3.42)    0.019 (2.37)
Father A level                          0.057 (5.76)    0.016 (1.66)    0.013 (1.31)    0.055 (5.51)    0.015 (1.53)    0.012 (1.25)
Father education missing                −0.039 (4.12)   −0.008 (0.78)   −0.001 (0.13)   −0.042 (4.42)   −0.009 (0.93)   −0.001 (0.13)
Mother degree                           0.097 (10.52)   0.014 (1.59)    −0.003 (0.39)   0.096 (10.43)   0.014 (1.57)    −0.003 (0.39)
Mother A level                          0.076 (8.73)    0.016 (1.98)    0.008 (1.00)    0.076 (8.71)    0.017 (2.03)    0.009 (1.04)
Mother education missing                −0.067 (7.01)   −0.042 (4.27)   −0.036 (3.60)   −0.068 (7.03)   −0.042 (4.29)   −0.036 (3.59)
Type of school attended: base case comprehensive to age 18
Age 16 comprehensive                    −0.038 (6.32)   −0.027 (4.55)   −0.022 (3.58)   −0.038 (6.36)   −0.027 (4.56)   −0.022 (3.57)
Grammar school                          0.204 (16.41)   −0.007 (0.68)   −0.023 (2.17)   0.204 (16.43)   −0.007 (0.66)   −0.023 (2.15)
Secondary modern                        −0.133 (8.81)   −0.045 (2.78)   −0.024 (1.37)   −0.134 (8.84)   −0.045 (2.79)   −0.023 (1.36)
Independent                             0.177 (18.01)   0.004 (0.45)    0.013 (1.43)    0.177 (18.06)   0.004 (0.44)    0.013 (1.44)
School type missing                     0.315 (2.26)    0.112 (0.88)    0.205 (1.48)    0.316 (2.26)    0.116 (0.91)    0.208 (1.50)
GCSE grades
GCSE maths grade                        –               0.117 (41.92)   0.083 (28.65)   –               0.117 (41.91)   0.083 (28.63)
GCSE maths grade missing flag           –               0.210 (9.90)    0.115 (5.51)    –               0.212 (9.97)    0.116 (5.52)
GCSE English grade                      –               0.106 (32.75)   0.066 (19.60)   –               0.105 (32.65)   0.066 (19.58)
GCSE English grade missing flag         –               0.142 (5.13)    0.062 (2.29)    –               0.140 (5.06)    0.061 (2.27)
Highest school qualifications
One or two A levels                     –               –               0.133 (15.70)   –               –               0.132 (15.67)
Three or more A levels                  –               –               0.360 (49.00)   –               –               0.360 (48.89)
Interaction SEG with cohort 7
Professional × cohort 7                 –               –               –               −0.091 (5.10)   −0.041 (2.23)   −0.022 (1.18)
Non-manual × cohort 7                   –               –               –               −0.073 (3.82)   −0.032 (1.65)   −0.022 (1.10)
Semi-skilled × cohort 7                 –               –               –               0.015 (0.54)    −0.015 (0.53)   −0.014 (0.51)
Unskilled × cohort 7                    –               –               –               0.117 (2.13)    0.101 (1.67)    0.083 (1.36)
Miscellaneous × cohort 7                –               –               –               0.037 (1.26)    −0.021 (0.71)   −0.031 (1.03)
Interaction SEG with cohort 8
Professional × cohort 8                 –               –               –               −0.022 (1.22)   0.010 (0.57)    −0.005 (0.28)
Non-manual × cohort 8                   –               –               –               0.015 (0.76)    0.024 (1.24)    0.007 (0.36)
Semi-skilled × cohort 8                 –               –               –               0.047 (1.72)    0.011 (0.40)    −0.002 (0.07)
Unskilled × cohort 8                    –               –               –               0.073 (1.40)    0.085 (1.47)    0.064 (1.10)
Miscellaneous × cohort 8                –               –               –               0.040 (1.32)    0.037 (1.18)    0.021 (0.67)
Interaction SEG with cohort 9
Professional × cohort 9                 –               –               –               −0.039 (2.01)   −0.017 (0.89)   −0.008 (0.40)
Non-manual × cohort 9                   –               –               –               −0.034 (1.63)   −0.017 (0.80)   −0.013 (0.62)
Semi-skilled × cohort 9                 –               –               –               0.031 (1.03)    0.011 (0.35)    0.006 (0.19)
Unskilled × cohort 9                    –               –               –               0.019 (0.35)    0.049 (0.78)    0.040 (0.62)
Miscellaneous × cohort 9                –               –               –               0.060 (1.65)    0.018 (0.48)    −0.001 (0.02)
Observations                            33,590          33,590          33,590          33,590          33,590          33,590
Sample: students from YCS cohorts 6, 7, 8 and 9. Base case is an individual with skilled manual background, white, parental education below A level, who attended a comprehensive school that accommodated students to age 18 and from cohort 6, i.e. age 18 in 1994. Estimated by probit.
References
Barr N, Crawford I (1998) The Dearing report and the government response: a critique. Polit Q 69(1):72–84
Barr N (2002) Funding higher education: policies for access and quality. House of Commons Education and Skills Committee, Post-16 student support, Session 2001-02, 24 April 2002
Behrman J (1997) Mother’s schooling and child education: a survey. University of Pennsylvania, Department of Economics, discussion paper 025, manuscript
Bhattacharyya G, Ison L, Blair M (2003) Minority ethnic attainment and participation in education and training: the evidence. Department for Education and Skills Research Report 01–03
Blanden J, Machin S (2003) Educational inequality and the expansion of UK higher education. Centre for Economic Performance mimeo
Blanden J, Gregg P, Machin S (2003) Changes in educational inequality. Centre for the Economics of Education, discussion paper (forthcoming)
Blanden J, Goodman A, Gregg P, Machin S (2002) Changes in intergenerational mobility in Britain. Centre for the Economics of Education, discussion paper no. 26, London School of Economics. In: Corak M (ed) Generational income mobility in North America and Europe. Cambridge University Press, Cambridge (forthcoming)
Bradley S, Taylor J (2004) Ethnicity, educational attainment and the transition from school. Manchester School 72(3):317–346
Breen R, Goldthorpe J (1997) Explaining educational differentials – towards a formal rational action theory. Ration Soc 9:275–305
Breen R, Goldthorpe J (1999) Class inequality and meritocracy: a critique of Saunders and an alternative analysis. Br J Sociol 50(1):1–27
Brooks-Gunn J, Duncan GJ, Mariato N (1997) Poor families, poor outcomes: the well-being of children and youth. In: Duncan GJ, Brooks-Gunn J (eds) Consequences of growing up poor. Russell Sage Foundation, New York, pp 1–17
Buchardt T (2004) Aiming high: the educational and occupational aspirations of young disabled people. Support Learn 19(4):181–186
Callender C (2003) Student financial support in higher education: access and exclusion. In: Tight M (ed) Access and exclusion: international perspectives on higher education research. Elsevier, London, pp 127–158
Cameron S, Heckman J (2001) The dynamics of educational attainment of black, Hispanic and white males. J Polit Econ 109:455–499
Carneiro P, Heckman J (2003) Human capital policy. In: Heckman J, Krueger A (eds) Inequality in America: what role for human capital policies? MIT Press, Cambridge, pp 79–90
Chevalier A, Lanot G (2006) Financial transfer and educational achievement. Educ Econ 10(2):165–182
Connor H, Dewson S, Tyers C, Eccles J, Aston J (2001) Social class and higher education: issues affecting decisions on participation by lower social class groups. Department for Education and Skills research report RR267
Cook M, Evans W (2000) Families or schools? Explaining the convergence in white and black academic performance. J Labor Econ 18(4):729–754
Currie J, Thomas D (1999) Early test scores, socioeconomic status and future outcomes. NBER working paper no. 6943. National Bureau of Economic Research, Cambridge
Dearden L, Machin S, Reed H (1997) Intergenerational mobility in Britain. Econ J 107:47–64
Dearden L, McIntosh S, Myck M, Vignoles A (2002) The returns to academic, vocational and basic skills in Britain. Bull Econ Res 54(3):249–274
De Fraja G (2005) Reverse discrimination and efficiency in education. Int Econ Rev 46:1009–1031
De Fraja G (2002) The design of optimal education policies. Rev Econ Stud 69:437–466
Dolton P, Greenaway D, Vignoles A (1997) Whither higher education? An economic perspective for the Dearing Committee of Inquiry. Econ J 107(442):710–726
Dolton P, Makepeace G, Gannon B (2001) The earnings and employment effects of young people’s vocational training in Britain. Manchester School 69(4):387–417
Erickson R, Goldthorpe J (1992) The constant flux: a study of class mobility in industrial societies. Oxford University Press, Oxford
Feinstein L, Symons J (1999) Attainment in secondary school. Oxford Econ Pap 51:300–321
Feinstein L, Duckworth K, Sabates R (2004) A model of the inter-generational transmission of educational success. Wider benefits of learning research report no. 10, Bedford Group, Institute of Education, London
Fryer R, Levitt S (2005) The black–white test score gap through third grade. NBER working paper 11049, manuscript
Galindo-Rueda F, Vignoles A (2005) The declining relative importance of ability in predicting educational attainment. J Hum Resources 40(2):335–353
Galindo-Rueda F, Marcenaro-Gutierrez O, Vignoles A (2004) The widening socio-economic gap in UK higher education. Natl Inst Econ Rev 190:70–82
Gayle V, Berridge D, Davies R (2000) Young people’s routes to higher education: exploring social processes with longitudinal data. Higher Educ Rev 33:47–64
Gayle V, Berridge D, Davies R (2002) Young people’s entry to higher education: quantifying influential factors. Oxford Rev Educ 28:5–20
Gayle V, Berridge D, Davies R (2003) Econometric analyses of the demand for higher education. Department for Education and Skills research report RR471
Gibbons S (2001) Paying for good neighbours? Neighbourhood deprivation and the community benefits of education. Centre for the Economics of Education, discussion paper no. 17, London School of Economics
Glennerster H (2001) United Kingdom education 1997–2001. Centre for the Analysis of Social Exclusion (CASE), paper 50
Goodman A (2004) Presentation to Department of Trade and Industry (DTI)
Greenaway D, Haynes M (2003) Funding higher education in the UK: the role of fees and loans. Econ J 113(485):F150–F167
Haveman R, Wolfe B (1995) The determinants of children’s attainments: a review of methods and findings. J Econ Lit 33(4):1829–1878
Howieson C, Croxford L (1996) Using the YCS to analyse the outcomes of careers education and guidance. The Stationery Office, DFEE research series, London
Jackson M, Erickson R, Goldthorpe J, Yaish M (2004) Primary and secondary effects on class differentials in educational attainment: the transition to A level courses in England and Wales. Meeting of the ISA Committee on Social Stratification and Mobility, Neuchatel
Lynn P (1996) England and Wales Youth Cohort Study: the effect of time between contacts, questionnaire length, personalisation and other factors on response to the YCS (research studies). The Stationery Office Books, London
Machin S, Vignoles A (2004) Education inequality. Fiscal Stud 25(2):107–128
Machin S, Vignoles A (2005) What’s the good of education? The economics of education in the UK. Princeton University Press, Princeton
Payne J (2001) Patterns of participation in full-time education after 16: an analysis of the England and Wales Youth Cohort Studies. Department for Education and Skills, Nottingham
Payne J, Cheng Y, Witherspoon S (1996) Education and training for 16–18 year olds in England and Wales – individual paths and national trends. Policy Studies Institute, London
Rice P (2004) Education and training post-16 – differences across the British regions. CEE conference
Rice P (1999) The impact of local labour markets on investment in further education: evidence from the England and Wales youth cohort studies. J Popul Econ 12(2):287–231
Saunders P (1997) Social mobility in Britain: an empirical evaluation of two competing explanations. Sociology 31:261–288
Sianesi B (2003) Returns to education: a non-technical summary of CEE work and policy discussion. Draft report for Department for Education and Skills
Schoon I, Bynner J, Joshi H, Parsons S, Wiggins RD, Sacker A (2002) The influence of context, timing, and duration of risk experiences for the passage from childhood to mid-adulthood. Child Dev 73(5):1486–1504
Woodhall M (ed) (2002) Paying for learning: the debate on student fees, grants and loans in international perspective. Welsh J Educ 11(1):1–9
Does the early bird catch the worm? Instrumental variable estimates of early educational effects of age of school entry in Germany Patrick A. Puhani · Andrea M. Weber
Revised: 15 June 2006 / Published online: 26 August 2006 © Springer-Verlag 2006
Abstract We estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained based on instrumental variable estimation exploiting the exogenous variation in month of birth. We find robust and significant positive effects on educational outcomes for pupils who enter school at 7 instead of 6 years of age: test scores at the end of primary school increase by about 0.40 standard deviations and the probability of attending the highest secondary schooling track (Gymnasium) increases by about 12 percentage points.
JEL classification I21 · I28 · J24
Keywords Education · Immigration · Policy · Identification
P. A. Puhani (B) University of Hannover, Institut für Arbeitsökonomik, Königsworther Platz 1, 30167 Hannover, Germany e-mail:
[email protected] P. A. Puhani SIAW, University of St. Gallen, St. Gallen, Switzerland P. A. Puhani IZA, Bonn, Germany A. M. Weber Darmstadt University of Technology, Fachbereich 1, Residenzschloss S313/136, Marktplatz 15, 64283 Darmstadt, Germany e-mail:
[email protected]
1 Introduction
The ideal age at which children should start school and the effectiveness of pre-school learning programs are subjects of ongoing debate among researchers and policy makers. For example, in the economic literature Currie (2001) summarises evidence on early childhood education. From a theoretical point of view, skill formation can be modelled as a process characterised by multiple stages in which early investments are crucial for later investments (cf. Cunha et al. 2006, on life cycle skill formation). In the empirical literature, age of school entry effects are estimated in Angrist and Krueger (1992) and Mayer and Knutson (1999) for the United States, Leuven et al. (2004) for the Netherlands, Strøm (2004) for Norway, Bedard and Dhuey (2006) for a set of industrialised countries, Fertig and Kluve (2005) for Germany and Fredriksson and Öckert (2005) for Sweden. In Germany, as in most other European countries, children are traditionally supposed to start school when they are about 6 years old. A look back in history reveals that starting education at age 6 or 7 is not just a feature of the industrial era. Already in Germany’s mediaeval predecessor, the Holy Roman Empire, the track to knighthood began at age 7, when a boy served as a footboy (Page). In post-war Germany, the changing attitude towards school entry age has been driven by debates among educationalists. At the beginning of the 1950s, Kern (1951) hypothesised that a higher school entry age could prevent children from failing in school. Subsequently, the school entry age was increased by a total of 5 months in 1955 and in 1964. Since that time, there has also been a trend to have children with learning problems enter school 1 year later than recommended by the official school entry rule. In recent years, however, debates on the long duration of the German education system have put early school entry back on the agenda. Policy makers in Germany’s decentralised education system have subsequently implemented measures to reduce the average age of school entry (see Sect. 2). Therefore, it seems reasonable to ask whether such policies can be expected to improve educational attainment. In this article, we estimate the causal effect of varying the age of school entry in Germany between 6 and 7 years by an instrumental variable strategy, using the exogenous variation in month of birth as an instrument for the age of school entry. The variation between ages 6 and 7 is both a major variation observed internationally for the school starting age and a major issue of discussion in the national German debates. Using two different data sets, we measure the effect of age of school entry at the end of primary school and in the middle of secondary school. Our outcome measures are a test score for primary school pupils and the school track attended, respectively. To the best of our knowledge, ours is the second study investigating the effect of age of school entry by instrumental variable estimation for Germany. We do not show results based on the same data as used in the previous study by Fertig and Kluve (2005) since we cast doubt on the quality of this data for our purposes (cf. the discussion paper version, Puhani and Weber 2005).
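As a rough illustration of this identification strategy, the sketch below implements a generic two-stage least squares estimator by hand, instrumenting age of school entry with an indicator for being born after the June cut-off. The variable names and the simulated data are assumptions made for the example; this is not the authors’ code, and the estimators actually used in the paper are described in Sects. 4 and 5.

# Hand-rolled 2SLS sketch: instrument age of school entry with month of birth.
# Variable names (entry_age, born_jul_dec, test_score, controls) are assumptions
# for illustration; they do not correspond to the PIRLS or Hessen data fields.
import numpy as np

def two_stage_least_squares(y, x_endog, z_instr, controls):
    """y: outcome (e.g. test score); x_endog: age of school entry;
    z_instr: born after the June cut-off (0/1); controls: n x k array."""
    n = len(y)
    ones = np.ones((n, 1))

    # First stage: regress entry age on the instrument and controls.
    Z = np.hstack([ones, z_instr.reshape(-1, 1), controls])
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ gamma                      # predicted (exogenous part of) entry age

    # Second stage: regress the outcome on predicted entry age and controls.
    X2 = np.hstack([ones, x_hat.reshape(-1, 1), controls])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta[1]                         # coefficient on instrumented entry age

# Usage with simulated data standing in for real pupils: entry age is partly
# driven by unobserved ability (endogeneity), but the birth-month instrument is not.
rng = np.random.default_rng(0)
n = 5_000
born_jul_dec = rng.integers(0, 2, n)                   # instrument: birth in July-December
ability = rng.normal(size=n)                           # unobserved confounder
entry_age = 6 + 0.6 * born_jul_dec - 0.2 * (ability > 1) + 0.1 * rng.normal(size=n)
test_score = 0.4 * entry_age + 0.8 * ability + rng.normal(size=n)
controls = rng.normal(size=(n, 2))                     # e.g. sex, parental background
print(two_stage_least_squares(test_score, entry_age, born_jul_dec, controls))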
The influence of school entry age on educational outcomes is a well-discussed topic, especially in the US and British empirical educationalist literature.1 However, these studies do not sufficiently account for the endogeneity of the age of school entry: in Germany, as well as in many other countries, school entry age is not only determined by some exogenous rule, but also depends on the child’s intellectual or physical development and the parents’ wishes. In several countries (e.g. the US) some schools even use standardised tests in order to assess potential first graders’ or kindergartners’ school readiness. A key institutional difference between Germany on the one hand and the US or the UK on the other is that in Germany each child, independently of date of birth, has to complete at least 9 years of compulsory full-time schooling.2 In the US and the UK, length of mandatory schooling varies with date of birth, as children are allowed to leave school once they have reached a certain age (cf. Angrist and Krueger 1992, for the US and Del Bono and Galindo-Rueda 2004, for the UK).3 Hence, in these Anglo-Saxon countries compulsory schooling length is shorter for pupils who entered school at an older age. In Germany, however, all pupils have to wait at least until their ninth school year has finished before they may leave full-time education. Consequently, the German institutional setup allows identification of age of school entry effects independently of compulsory schooling, which is not possible in the US or the UK. A further feature that makes the German case interesting to examine is that the German education system is highly selective. Unlike in most other countries, the child’s performance in primary school is crucial for the educational career of a person, because at the end of primary school (at age 10; primary school usually lasts for 4 years) children are selected into one of three educational tracks: the most academic is Gymnasium, usually consisting of 9 further years of schooling, followed by Realschule (6 years) and Hauptschule (5 years and the most vocational track).
1 Stipek (2002) provides a thorough review of this literature. One type of existing studies considers the effects of academic red-shirting (i.e. the delay of school entry) and early grade retention (e.g. May et al. 1995; Jimerson et al. 1997; Zill et al. 1997; Graue and DiPerna 2000) or of early school admission of selected children (cf. Proctor et al. 1986, for a review). However, these studies do not appropriately take the endogeneity problem in measuring entry age effects into account and the mixed findings are therefore hard to interpret (cf. Stipek 2002; Angrist 2004). A second stream of literature examines the effect of entry age induced through season of birth on educational and social outcomes or mental development (e.g. Kinard and Reinherz 1986; Morrison et al. 1997; Hutchison and Sharp 1999; Stipek and Byler 2001). The results mostly indicate that there are no long-lasting effects, while there is evidence of positive effects of a higher school entry age in the short run. Since outcomes are separately analysed by season of birth, which is taken as exogenous, the applied methods solve the endogeneity problem by producing reduced form estimates (without however explicitly discussing it). None of the mentioned studies uses an IV approach as in the recent economic literature.
2 The exact rule depends on the state.
The 9 or 10 years of compulsory full-time education are followed by either at least 1 additional year of full-time education or by several years of part-time education in a vocational school (Berufsschule) within the German apprenticeship system. 3 To be more precise, in England and Wales children could traditionally (between 1962 and 1997)
leave school at the beginning of the Easter holiday in the school year in which they attained the relevant leaving age if they were born between September and the end of January. Children born between February and the end of August could not leave before the end of May.
As track selection is supposed to be based on the pupil’s primary school performance, the German track system may aggravate age of school entry effects by perpetuating inequalities arising at early stages of the education system (cf. Hanushek and Wößmann 2006). Hence, age of school entry may have larger and more lasting effects in Germany than in countries with a comprehensive school system. The article is structured as follows. Section 2 outlines age of school entry regulations for the cohorts we observe in our data and sketches main features of the German school system. The data sets we use are described in Sect. 3. First, for primary school test scores we rely on the ‘Progress in International Reading Literacy Study’ of 2001 (PIRLS). Second, for the school track during secondary schooling we use newly available administrative data for the state of Hessen, including all pupils in general education in the school year 2004/2005. Section 4 argues that our empirical approach to identify the effect of age of school entry on educational outcomes is justified. We show that the instrument is effectively uncorrelated with the observed variables used as regressors and that first-stage regressions do not exhibit a weak instrument problem. The estimation results are presented and discussed in Sect. 5. We find robust evidence that entering the current German school system at the age of 7 instead of 6 years raises primary school test scores by two fifths of a standard deviation and increases the probability of attending the highest school track (Gymnasium) by about 12 percentage points. If we assume that the school track attended will be completed as we observe it in the data, the amount of secondary schooling is increased by almost half a year (about 5 months) on average by entering school 1 year older. Section 6 concludes and reports results from a small survey of headmasters and headmistresses, which we carried out in order to discuss potential explanations for our empirical estimates.
2 Age of school entry and the German education system
In international comparison, the German compulsory school starting age of 6 years is equal to the median and mode of the distribution displayed in Table 1. Before the age of 6, German children usually attend kindergarten, which is a playgroup rather than a pre-school. Projects in which children learn how to read and write in kindergarten are recent and rare. Therefore, entering primary school has traditionally meant for a German child moving from a playgroup to an educational regime of teaching from 8 o’clock in the morning to noon with only short breaks (there is some variation in these times by state). Although the exact school entry age is regulated by law in Germany, personal and school discretion is high. The school laws (Schulgesetze) of the states (Länder) are traditionally based on the so-called Hamburg Accord (Hamburger Abkommen), which was in place in Western Germany between 1964 and 1997. The Hamburg Accord states that children whose sixth birthday falls before the end of June of a given calendar year enter school at the beginning of the corresponding school year (normally in August).
Table 1 Compulsory school starting age by country
Age 4: Northern Ireland; Netherlands (from 8/02)
Age 5: Australia (Tasmania); England; Malta; Netherlands (until 8/02); New Zealand; Scotland; Wales
Age 6: Austria; Australia*; Belgium; Cyprus; Czech Republic; France; Germany; Greece; Hong-Kong; Hungary; Iceland; Republic of Ireland; Italy; Japan; Korea; Liechtenstein; Lithuania; Luxembourg; Norway; Portugal; Slovakia; Slovenia; Spain; Switzerland; USA
Age 7: Bulgaria; Canada; Denmark; Estonia; Finland; Latvia; Poland; Romania; Singapore; Sweden; Switzerland
Note: Based on information from 2002. *Except the state of Tasmania. In Switzerland entry age differs by region
Sources: Sharp (2002) and Bertram and Pascal (2002)
Children born later are supposed to start school in the following calendar year (again around August).4 Deviation from the Hamburg Accord may be caused by parents and school principals considering a child (not) mature enough to start school at an early age. Traditionally, the school laws allow for such leeway. In practice, this yields a situation where children born between the official cut-off date ‘end of June’ and the school year starting date are often admitted to school in the calendar year in which they turn 6 years of age. Formally, the Hamburg Accord with its June cut-off date is (by law) the relevant regulation in all German states during the time period referred to in our data sets. Only after 1997 was the Hamburg Accord made less binding: the Council of the Ministers of Education encouraged the states to deviate from the traditional school entry cut-off date of end of June to allow later cut-off dates (usually up to the end of September). This increased even further the discretion that schools and parents already had de facto.
4 Note that the real start of the school year varies slightly over calendar year and state: whereas August 1st is the official nationwide school starting date, the actual starting dates vary by calendar year and state in order to avoid traffic jams on the motorways during vacation times.
However, most state laws today still refer to June as the cut-off date while explicitly allowing for discretion. Some states (Baden-Württemberg, Bayern, Berlin, Brandenburg, Thüringen) have recently chosen later cut-off dates. Apart from the school entry regulations, tracking is another feature of the German education system important to the analyses in this article. After 4 years in primary school, pupils usually change to one of three secondary school tracks.5 The most vocational and least academic level of secondary schooling is called Hauptschule (grades 5–9), the intermediate level Realschule (grades 5–10) and the most academic level Gymnasium (grammar school, grades 5–13).6 Track selection is important, as only graduation from Gymnasium directly qualifies for university or polytechnic tertiary education. Hauptschule and Realschule are supposed to be followed by vocational training within the German apprenticeship system. The distribution of pupils across the three tracks varies by state, but for Germany as a whole it is roughly equal. Although there are ways to enter the Gymnasium track after Hauptschule, Realschule or apprenticeship training, the track selection after primary school is a key decision for the economic and social life of a person in Germany (Dustmann 2004). Note that Germany also has comprehensive schools (Gesamtschulen) as well as schools for children with special needs, mostly due to physical or mental disabilities (Sonderschulen). There are also so-called Waldorf schools that follow a special pedagogy which, for example, does not give marks to pupils. In the year 2003, only 17% of graduates came from schools outside of the standard tracking system (11% were in comprehensive schools, 6% in special schools and 1% in Waldorf schools), as Fig. 1 shows.
3 Data
We use two different data sets measuring educational outcomes at two stages of pupils’ lives. First, the PIRLS of 2001 provides us with internationally standardised test scores and other relevant information for 6,591 German pupils in the fourth grade of primary school. Second, we use administrative data on all pupils from the state of Hessen in the school year 2004/2005 who entered primary school between 1997 and 1999. The observed cohorts overlap with those tested in the PIRLS study.7 Our estimation sample thus contains 182,676 observations. More detail is given in the following subsections.
5 In the East German states of Berlin and Brandenburg, primary school goes up to grade 6, so that the selection into school tracks starts 2 years later there than in the rest of Germany.
6 In the East German states of Sachsen and Thüringen, Gymnasium ends after grade 12. In the small West German state of Rheinland-Pfalz, Gymnasium nowadays ends after twelve and a half years of schooling. Most states are currently planning to have Gymnasium end after grade 12, but this is not relevant for our samples.
7 We also tried to obtain administrative pupil statistics from other German states, but were either denied access or told that an essential variable for our analysis was missing.
Fig. 1 The German tracking system: graduates in 2003 (Higher Secondary (Gymnasium) 28%, Intermediate Secondary (Realschule) 26%, Lower Secondary (Hauptschule) 28%, Comprehensive School 11%, Special Schools 6%, Waldorf (Private) School 1%). Source: German Federal Statistical Office (2004): Fachserie 11 / Reihe 1: Bildung und Kultur, Schuljahr 2003/04, Wiesbaden.
3.1 The Progress in International Reading Literacy Study (PIRLS)
The PIRLS data has been collected by the International Association for the Evaluation of Educational Achievement (IEA) and includes test scores from an internationally conducted standardised reading literacy test as well as background information on pupils and parents. The underlying reading literacy tests refer to basic competences that are crucial in key situations of daily life and to skills required to succeed in future education, vocational training and professional life (cf. Bos et al. 2003). More specifically, reading achievement is assessed by different items covering four defined ‘reading processes’. These different aspects of reading literacy relate to the ability to ‘focus on and retrieve explicitly stated information’, ‘make straightforward inferences’, ‘interpret and integrate ideas and information’ and ‘examine and evaluate content, language and textual elements’ (Gonzalez and Kennedy 2003). Each child answers two out of eight ‘blocks’ of the entire test, and individual achievement is scaled using item response theory methods (the scaling methodology is explained in Gonzalez and Kennedy 2003). In order to allow international comparisons, these test scores have been standardised so that the international mean is 500 and the standard deviation equals 100. For Germany the mean equals 539 and the standard deviation is 67. Overall, 7,633 pupils at the end of fourth grade in 211 primary schools are sampled in the German PIRLS data. Because the sampling units are schools rather than pupils, all of our results presented in the following sections use
standard errors adjusted for clustering. We also use the sampling weights provided in the data set. As we lack information on the age of school entry (to the month) for more than 1,000 observations, our effective sample size is reduced to 6,591.8 As we are interested in estimating the effect of age of school entry on educational outcomes, we might like to sample a birth or school entry cohort and estimate the effect of interest after 4 years of schooling, no matter which grade pupils have achieved by then. The other possibility is to measure educational outcomes at the end of primary school irrespective of how long it took the pupil to reach grade 4. The advantage of the latter approach is that the pupil’s performance at grade 4 of primary school is what matters in the end for the secondary school track recommendation he or she receives. As the PIRLS data samples pupils in grade 4, we can only identify the parameter associated with the latter approach, except that it is not an entry cohort but an exit cohort (fourth graders at the end of primary school) that is sampled. In our data, 86% of pupils entered school in 1997, whereas 11 and 2% entered in 1996 (grade repeaters) and 1998 (grade skippers), respectively. Hence, we observe pupils once they have reached grade 4, even if they have spent only 3 or even 5 years in school. If grade repetition and skipping behaviour has not changed significantly between these neighbouring cohorts, our results should be roughly representative for the 1997 school entrants.
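Since the sampling units are schools, one standard way to obtain such weighted, cluster-adjusted estimates is sketched below; this is only a minimal illustration, and the column names (score, entry_age, weight, school_id) are hypothetical placeholders rather than the actual PIRLS variable names.

```python
# A minimal sketch of a weighted regression with school-level clustered
# standard errors, assuming a pandas DataFrame `df` with hypothetical
# columns: 'score', 'entry_age', 'weight' (sampling weight), 'school_id'.
import pandas as pd
import statsmodels.api as sm

def weighted_clustered_ols(df: pd.DataFrame):
    X = sm.add_constant(df[["entry_age"]])            # regressors (here: entry age only)
    model = sm.WLS(df["score"], X, weights=df["weight"])
    # Cluster-robust covariance because schools, not pupils, are the sampling units.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# res = weighted_clustered_ols(df); print(res.summary())
```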
3.2 Administrative data on all pupils in the state of Hessen
The second data source we use is ‘Pupil-Level Data of the Statistics of General Schools for the State of Hessen’ (Hessische Schülereinzeldaten der Statistik an allgemein bildenden Schulen). It covers all pupils in general education in the school year 2004/2005 and is collected on behalf of the state Ministry of Education. To our knowledge, this is the first research article using this individual-level administrative data. The original data set contains 694,523 observations from 1,869 schools. As it does not contain any school marks or test scores, we use the track attended in 2004/2005 by pupils having entered school between 1997 and 1999 as the outcome variable. This leaves us with 182,676 observations, 93% of them in grades 6–8. Tracks are coded according to the years of schooling they imply: 13 for Gymnasium (grammar school), 10 for Realschule and 9 for Hauptschule. Pupils at comprehensive schools (Gesamtschule) are frequently allocated to an internal track that corresponds to Gymnasium, Realschule or Hauptschule as well. In this case, the administrative data codes them as if they were in these schools.
8 The age of school entry is unfortunately not missing at random: immigrants and pupils whose parents have a comparatively low level of education are overrepresented among the missing observations. If age of school entry is also missing systematically for pupils with unobserved characteristics that are relevant to educational outcomes, our estimates based on the selected sample might be biased. However, as we control for parental background and immigrant status, which is likely to be correlated with these characteristics, we hope to reduce this potential bias markedly.
If no such information is given, we code them as 10, i.e. equivalent to Realschule. Pupils in special schools (Sonderschule) are allocated code 7.9 In the following section, we provide more detail on the theoretical and actual age of school entry in our data and suggest an instrumental variable strategy for estimating the effect of age of school entry on educational outcomes.
4 The exogeneity of month of birth and first stage regressions
4.1 The endogeneity of age of school entry
Regressing educational outcomes on age of school entry by ordinary least squares (OLS) must be expected to yield biased estimates rather than the causal effect of age of school entry on educational results. The reason is that the school entry decision is influenced not just by regulations like the Hamburg Accord, but also by the child’s development as well as the parents’ and the school’s judgements (cf. Sect. 2). Thus, ambitious parents may push for an early school entry (at age 5) of their child, or children with learning problems might be recommended to enter school 1 year later (at age 7) than prescribed by official regulations. These mechanisms suggest that, on average, less able pupils will enter school at a later age, and thus OLS estimates of age of school entry effects on educational outcomes should exhibit a downward bias. Figure 2 displays the distributions of the actually observed school entry age and the theoretical entry age according to the ‘Hamburg Accord’. The theoretical school entry age I(b_i, s_i) is related to a child’s month of birth b_i and the month the school year starts s_i in the following way:

I(b_i, s_i) = \begin{cases} \dfrac{(72 + s_i) - b_i}{12} & \text{if } 1 \leq b_i \leq 6 \\[4pt] \dfrac{(84 + s_i) - b_i}{12} & \text{if } 6 < b_i \leq 12 \end{cases} \qquad (1)
where the theoretical school entry age I(b_i, s_i) is measured in years (in decimals up to the month). The indicator for the month of birth b_i ranges from 1 to 12, whereas s_i generally varies between the end of July, August, and the beginning of September. If b_i and s_i are exogenous, the theoretical school entry age I(b_i, s_i) is exogenous and can be used as an instrument for the actual age of school entry. Note that the start of the school year s_i varies over calendar year and state: since we do not have a state identifier in the PIRLS data, we assume that August 1st, which is the official nationwide school starting date, is the actual starting date. For the cohorts we observe in the state of Hessen, the first year of primary school always started in August.
9 About 0.86% of pupils in the original sample are still in primary school when we observe them: they are excluded from the sample in the reported estimates since we do not know which track they will be assigned to. To check how far these pupils affect our results, we carry out a rather extreme robustness check by allocating code 4 to individuals still in primary school, which indicates that they failed to move to secondary school in time. We carry out a further sensitivity check by excluding pupils in comprehensive and special schools. Pupils in Waldorf schools are not separately identified: they are coded like comprehensive schools. Note that private schools are included in our sample: 10,709 pupils are in private schools, about 76% of whom attend grammar school (Gymnasium).
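To make the assignment rule in Eq. (1) concrete, here is a minimal sketch that computes the theoretical (instrument) entry age from the month of birth, with the school starting month set to August (s_i = 8) as assumed in the text; the function and variable names are ours, not taken from the paper or the data sets.

```python
# Minimal sketch of the Hamburg Accord assignment rule in Eq. (1).
def theoretical_entry_age(birth_month: int, school_start_month: int = 8) -> float:
    """Theoretical school entry age in years (decimals up to the month)."""
    if not 1 <= birth_month <= 12:
        raise ValueError("birth_month must lie between 1 and 12")
    if birth_month <= 6:
        # Born January-June: school start in the calendar year the child turns 6.
        months_old = (72 + school_start_month) - birth_month
    else:
        # Born July-December: school start in the following calendar year.
        months_old = (84 + school_start_month) - birth_month
    return months_old / 12

# Children born just before and just after the cut-off differ by roughly one year:
print(round(theoretical_entry_age(6), 2))  # June-born:  6.17 years
print(round(theoretical_entry_age(7), 2))  # July-born:  7.08 years
```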
Fig. 2 Observed and theoretical age at school entry (histograms of entry age in percent, between ages 5 and 9, shown separately for the PIRLS 2001 data and for the Pupil-Level Data of the Statistics of General Schools Hessen; left panels: observed age at school entry, right panels: theoretical age at school entry). Note: Theoretical age at school entry according to the ‘Hamburg Accord’. Sources: PIRLS 2001. Pupil-Level Data of the Statistics of General Schools for the State of Hessen provided by the State Statistical Office (Hessisches Statistisches Landesamt).
From Fig. 2, it is clearly visible that the actual distribution of age of school entry is far more dispersed and skewed to the right than the distribution prescribed by the Hamburg Accord (the skewness is positive and ranges from 0.33 to 0.50). This is because many parents and schools have children start school 1 year later than suggested by the regulations. However, a few children also start school 1 year earlier, at about age 5. Despite this, the large majority of pupils start school at the prescribed age. A further graphical illustration of the degree of compliance with the age of school entry rule discussed in Sect. 2 is provided in Fig. 3. The first panel displays the actual age of school entry by month of birth in the PIRLS data together with the theoretical age according to the Hamburg Accord. Visual inspection suggests a significant correlation between the theoretical and the actual age of school entry. However, children born from October to June enter school a little older on average than prescribed by the Hamburg Accord. This is consistent with the graphs in Fig. 2 showing that late entry is more frequent than early entry.
Fig. 3 Observed and theoretical entry ages by birth month (average actual age of school entry and theoretical age according to the Hamburg Accord, plotted by month of birth 1–12; upper panel: PIRLS 2001, lower panel: Pupil-Level Data of the Statistics of General Level Schools Hessen). Sources: PIRLS 2001. Pupil-Level Data of the Statistics of General Schools for the State of Hessen. Own computation
However, for those born between July and September, the average age of school entry is lower than prescribed by the Hamburg Accord, illustrating that close to the cut-off point many parents decide to have their children enter school early. A similar picture concerning non-compliance with the cut-off date of the Hamburg Accord arises in the second panel of Fig. 3: in the administrative data for Hessen, pupils born just after the cut-off date ‘end of June’ enter school earlier on average than demanded by the Hamburg Accord.
4.2 Identification strategy
In order to estimate the causal effect of age of school entry on educational outcomes, we adopt an instrumental variable identification strategy (two-stage least squares, 2SLS).
Table 2 Variables included in the regression models
Group of regressors: PIRLS 2001 | Administrative data for Hessen
Specification 1: Entry age only | Entry age only
Specification 2: Specification 1 + gender | Specification 1 + gender + entry cohorts + county indicators
Specification 3: Specification 2 + cultural variables (immigrant) (a) | Specification 2 + cultural variables (country of origin)
Specification 4: Specification 3 + parental education (b) | –
Specification 5: Specification 4 + family background (c) | –
(a) Immigrant background is controlled for by a dummy variable indicating whether the student or his/her parents were born abroad or if the student often speaks a foreign language at home
(b) Three categories of parental education are defined: (1) academic education, (2) non-academic degree, (3) no vocational degree
(c) Includes the number of siblings and its square and the number of books at home
The instrument for the endogenous age of school entry is the theoretical age of school entry as prescribed by the Hamburg Accord, where the school starting month is set to August as explained in the previous subsection: I(b_i, s_i = 8). In order for the instrument to be valid, it has to be both correlated with the actual age of school entry and uncorrelated with unobserved factors influencing educational performance in a prospective regression equation. In order to gauge whether the instrument is truly exogenous, i.e. uncorrelated with any unobserved factors that might influence educational performance (an assumption we cannot test directly), we test whether it is correlated with observed variables that we believe might influence educational performance. Table 2 lists the groups of regressors that we include in the 2SLS instrumental variable estimation models. Note that the regressors enter both the first-stage (as discussed below in this section) and the second-stage regressions (as discussed in Sect. 5). The set of variables is partly determined by data availability in the respective data sets. In the first set of regressions (specification 1) we include no regressors in the model except age of school entry as the variable to be instrumented. The justification for this procedure is that if the instrument (driven by variation in month of birth) is completely random and therefore exogenous, no other control variables are required in order to estimate the causal effect of age of school entry on educational outcomes consistently in a 2SLS estimation procedure. Nevertheless, control variables that influence educational outcomes may reduce the standard errors of the estimates. As a first extension of the set of regressors (specification 2), we therefore include gender and regional indicators (the latter are only available in the data for the state of Hessen). In the administrative data for Hessen, we also control for the school entry cohort as part of ‘specification 2’. The third set of regressors (specification 3) adds cultural background, measured either by an immigration or a nationality indicator. The fourth extension (specification 4) adds parental education, which is available in the PIRLS data but not in the administrative data for Hessen. The fifth addition (specification 5) adds family background variables, i.e. the number of books at home and the number of siblings, which is
again only possible for the PIRLS data. We consider the control variables added in ‘specification 5’ as potentially problematic, as they might be an outcome of pupils’ (potential) performance and hence be endogenous: for example, parents might be more likely to buy books if their children are (expected to be) performing well in school. Hence, controlling for these sets of variables may take out some of the effect that age of school entry has on educational outcomes. Although low correlations between the instrument and observable variables are supportive of the instrument’s exogeneity, they do not provide a guarantee. Recent evidence from medical studies suggests that birth month, which drives our instrument, might exert some direct effect on physical and psychological health (e.g. Willer et al. 2005). Furthermore, our instrument might be endogenous if parents plan the month in which a child is born or if, for example, better educated parents prefer certain birth months over others (cf. the discussion in Bound et al. 1995). Therefore, we do not rely exclusively on a ‘traditional’ instrumental variable approach. Drawing on a ‘fuzzy regression discontinuity design’ (cf. Hahn et al. 2001), our main results relate to a narrow sampling window where only students born in the 2 months adjacent to the respective school entry cut-off point are included in the 2SLS regressions. By restricting the samples to persons born in June and July, we hope to eliminate any potential direct seasonal effects which might affect the validity of the instrument. Furthermore, any differences in parental attitudes reflected in the planned timing of births should be minimised for children born in 2 adjacent months, as it is hard to ensure that a child is born in a very specific month. In Tables 3 and 4 we display the simple correlations between the instrument and the full set of control variables for different sampling windows. Correlations significant at the 10 or 5% level are marked with one or two asterisks, respectively. As Table 3 shows, the maximum correlation for the PIRLS data equals 0.02 in absolute value, which is very small. Hence, the few correlations of the instrument with regressors that are significantly different from zero are very close to zero. This finding is even more striking for the large administrative data set for Hessen in Table 4: no correlation is larger than 0.01 in absolute value. Our instrument (driven by month of birth) thus seems unrelated to gender, the district of residence and the country of origin. Table 3 also shows that the instrument is virtually unrelated to parental education, the number of siblings and the number of books in the household.
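The balance check behind Tables 3 and 4 can be sketched as simple correlations between the instrument and each observable, computed separately for the three sampling windows; the column names below are illustrative assumptions rather than the variable names used in either data set.

```python
# Illustrative balance check: correlate the instrument with observables
# within different sampling windows (discontinuity sample = born June/July).
import pandas as pd

WINDOWS = {
    "June/July": [6, 7],
    "June-September": [6, 7, 8, 9],
    "January-December": list(range(1, 13)),
}

def balance_table(df: pd.DataFrame, observables: list[str]) -> pd.DataFrame:
    """Correlation of the instrument with each observable, by sampling window."""
    cols = {}
    for name, months in WINDOWS.items():
        sub = df[df["birth_month"].isin(months)]
        cols[name] = {x: sub["theoretical_age"].corr(sub[x]) for x in observables}
    return pd.DataFrame(cols).round(2)

# Example call (hypothetical column names):
# print(balance_table(df, ["male", "immigrant", "father_academic"]))
```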
4.3 First-stage regressions
Having discussed the exogeneity of our instrument and the use of different sampling windows, we now check the second condition for a valid instrument, namely the (partial) correlation with the variable to be instrumented (age of school entry). Tables 5 and 6 report the coefficients of the instrument together with the F-statistics of the tests for significance of the instrument in the first-stage regressions of the 2SLS estimation procedure. A rule of thumb states that an F-statistic below about ten is indicative of a weak instrument problem (Staiger and Stock 1997; Stock et al. 2002).10
Table 3 Simple correlations between instrument and observables (PIRLS)
Sampling window: June/July | June–September | January–December
Added in specification 2: gender (reference = female)
  Male: 0.03 | 0.02 | 0.00
Added in specification 3: immigration (reference = no immigrant background)
  Immigrant: 0.04 | 0.02 | 0.00
  Missing: Immigrant: −0.03 | 0.00 | −0.02
Added in specification 4: parental education (reference = no vocational degree)
  Father: academic degree: 0.00 | 0.01 | 0.00
  Mother: academic degree: −0.02 | −0.01 | 0.00
  Father: non-academic degree: 0.03 | 0.01 | 0.01
  Mother: non-academic degree: 0.02 | 0.00 | 0.00
  Missing: education of father: −0.03 | −0.01 | 0.00
  Missing: education of mother: −0.01 | 0.00 | 0.00
Added in specification 5: family background
  Number of siblings: −0.01* | 0.00 | 0.01
  Missing: Number of siblings: −0.05 | −0.02 | −0.02**
  Log number of books at home: 0.02 | 0.02 | 0.01
  Missing: Log number of books: −0.03 | −0.02 | −0.01
Number of observations: 1,123 | 2,943 | 6,591
Note: *Significant at the 10% level. **Significant at the 5% level. The different specifications (specifications 1–5) are explained in Table 2. Specification 1 includes only the age of school entry
Source: PIRLS 2001. Own calculations
The tables therefore display the F-statistics for the various specifications (specifications 1–5) as outlined in Sect. 4.2. Tables 5 and 6 clearly show that, in both data sets, we have an instrument with an F-statistic far above the threshold value of ten. The degree of compliance with the rule can be seen from the coefficients reported in the tables. Using the narrowest sampling window of persons born in the 2 months adjacent to the respective cut-off date reveals that compliance with the Hamburg Accord is significant, with a coefficient of 0.40 in the PIRLS data (Table 5) and 0.41 in the Hessen data (Table 6). In the discontinuity sample, this means that the share of compliers is about 40%. The coefficient is slightly higher if we widen our sampling window to include pupils born until the end of September. Note that when using the full samples of pupils born in any month (January–December sampling window), the degree of compliance is also influenced by compliance with the assigned variation in school entry age among individuals born in months like January or April, i.e. in months distant from the official cut-off dates. We expect that non-compliance is lower for persons born further away from the cut-off date, which is confirmed by Fig. 3. Indeed, the coefficients of the full sample amount to 0.49 and are thus somewhat higher than in the smaller sampling windows.
10 If instruments are weak, the 2SLS estimator has a high standard error and inference using asymptotic approximations for the standard errors is not reliable. Furthermore, even a very small correlation between the instrument and the error term of the outcome equation may lead to significant inconsistencies if instruments are weak (Bound et al. 1995). In other words, 2SLS with weak instruments is generally not appropriate.
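As a rough sketch of such a first-stage check, one can regress the actual entry age on the theoretical entry age (plus any controls) and inspect the instrument's coefficient and F-statistic; the variable names are ours, and the covariance choice is a generic robust one rather than the exact specification used in the paper.

```python
# Minimal first-stage sketch: actual entry age regressed on the instrument.
import statsmodels.formula.api as smf

def first_stage(df, controls="male"):
    res = smf.ols(f"entry_age ~ theoretical_age + {controls}", data=df).fit(cov_type="HC1")
    coef = res.params["theoretical_age"]             # degree of compliance
    # With a single excluded instrument, the first-stage F equals the squared t-statistic.
    f_stat = res.tvalues["theoretical_age"] ** 2
    return coef, f_stat

# coef, f_stat = first_stage(df)  # an F far above ~10 speaks against a weak instrument
```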
Table 4 Simple correlations between instrument and observables (administrative data for Hessen)
Sampling window: June/July | June–September | January–December
Added in specification 2: gender (reference = female), entry cohort (reference = 1997) and county indicators
  Gender dummy variable (male = 1): 0.00 | 0.00 | 0.00
  School entry in 1998: 0.00 | 0.00 | 0.01**
  School entry in 1999: 0.01* | 0.01 | 0.00*
  County indicator 1 (Darmstadt): 0.00 | 0.00 | 0.00
  County indicator 2 (Frankfurt): 0.01 | 0.00 | 0.00
  County indicator 3 (Offenbach Stadt): 0.00 | 0.00 | 0.00
  County indicator 4 (Wiesbaden): 0.00 | 0.00 | −0.01**
  County indicator 5 (Bergstraße/Odenwald): 0.01 | 0.01 | 0.01**
  County indicator 6 (Darmstadt-Dieburg): −0.01 | −0.01 | 0.00
  County indicator 7 (Groß-Gerau): −0.01** | −0.01* | −0.01**
  County indicator 8 (Hochtaunus): 0.00 | 0.00 | 0.00
  County indicator 9 (Main-Kinzig): 0.00 | 0.00 | 0.00
  County indicator 10 (Offenbach): 0.00 | 0.00 | 0.00**
  County indicator 11 (Rheingau-Taunus): 0.00 | 0.00 | 0.00
  County indicator 12 (Offenbach): 0.00 | 0.00 | 0.00
  County indicator 13 (Wetterau): 0.00 | 0.00 | 0.00
  County indicator 14 (Gießen): 0.00 | 0.00 | 0.00
  County indicator 15 (Lahn-Dill): 0.00 | 0.00 | 0.00
  County indicator 16 (Limburg-Weilburg): 0.01** | 0.01 | 0.00**
  County indicator 17 (Marburg-Bied./Vogelsb.): −0.01 | 0.00 | 0.00
  County indicator 18 (Kassel Stadt): 0.00 | 0.00 | 0.00**
  County indicator 19 (Fulda/Hersfeld-Rotenb.): −0.01 | 0.00 | 0.00
  County indicator 20 (Kassel/Werra-Meißner): 0.00 | 0.00 | 0.00
  County indicator 21 (Schwalm-Ed./Waldeck-F.): 0.00 | 0.00 | 0.00
Added in specification 3: country of origin
  Country 1 (German speaking countries): 0.00 | 0.01 | 0.01**
  Country 2 (Turkey): 0.00 | −0.01 | −0.01**
  Country 3 (Italy and Greece): −0.01** | −0.01** | −0.01**
  Country 4 (Former Yugoslavian states): 0.01 | 0.00 | 0.00
  Country 5 (Remaining “Western” countries): −0.01 | 0.00 | 0.00
  Country 6 (Eastern Europe; former Soviet Union): 0.00 | 0.00 | 0.00
  Country 7 (Remaining Muslim countries): 0.00 | 0.00 | 0.00**
  Country 8 (Remaining Asia): 0.00 | 0.00 | 0.00
  Country 9 (Remaining countries): 0.00 | 0.00 | 0.00
Number of observations: 32,059 | 64,072 | 182,676
Note: *Significant at the 10% level. **Significant at the 5% level. The different specifications (specifications 1–3) are explained in Table 2. Specification 1 includes only the age of school entry
Source: Student-level data of the statistics of general schools for the state of Hessen 2004/2005 provided by the State Statistical Office (Hessisches Statistisches Landesamt). Own calculations
In sum, the estimated first-stage coefficients and their F-statistics confirm the picture given in Fig. 2 that compliance with the school entry rules is considerable, but not perfect. One has to keep in mind that 2SLS estimation identifies the causal effect of age of school entry using only the exogenous variation in the age of school entry generated by ‘compliers’, i.e. those persons who react to variations in the instrument (Imbens and Angrist 1994).
Table 5 First-stage results (PIRLS)
Sampling window: June/July | June–September | January–December
Specification 1 (F-statistic): 0.40** (86.7) | 0.42** (147.2) | 0.49** (433.1)
Specification 2 (F-statistic): 0.40** (89.1) | 0.42** (147.8) | 0.49** (427.1)
Specification 3 (F-statistic): 0.40** (90.6) | 0.42** (147.4) | 0.49** (426.5)
Specification 4 (F-statistic): 0.40** (94.6) | 0.42** (150.9) | 0.49** (440.8)
Specification 5 (F-statistic): 0.40** (95.1) | 0.42** (150.6) | 0.49** (428.6)
Observations: 1,123 | 2,943 | 6,591
Note: *Significant at the 10% level. **Significant at the 5% level. The different specifications (specifications 1–5) are explained in Table 2
Source: PIRLS 2001. Own calculations

Table 6 First-stage results (administrative data for Hessen)
Sampling window: June/July | June–September | January–December
Specification 1 (F-statistic): 0.41** (2277.1) | 0.45** (3504.3) | 0.49** (8196.0)
Specification 2 (F-statistic): 0.41** (2306.4) | 0.45** (3524.6) | 0.49** (8189.0)
Specification 3 (F-statistic): 0.41** (2325.5) | 0.45** (3567.7) | 0.49** (8321.2)
Observations: 32,059 | 64,072 | 182,676
Note: *Significant at the 10% level. **Significant at the 5% level. The different specifications (specifications 1–3) are explained in Table 2
Source: Student-level data of the statistics of general schools for the state of Hessen 2004/2005 provided by the State Statistical Office (Hessisches Statistisches Landesamt). Own calculations
Although the 2SLS model implicitly assumes that the effect of age of school entry is homogeneous across the population, the estimate is equivalent to the local average treatment effect (LATE) introduced in Imbens and Angrist (1994) for binary instruments.11 Therefore, the results discussed in the following section may not be representative for the pupil population as a whole. Non-compliers are likely to be either particularly weak pupils who enter school later than prescribed or strong performers who enter school earlier than suggested, or they might be children of parents who have strong views on the age at which their child should enter school and consequently would not respond to cut-off dates.
11 We also tried further instruments based on other cut-off dates (results are reported in the discussion paper version, Puhani and Weber 2005). We assume that persons reacting to the end of June (the Hamburg Accord) as the cut-off are more representative of the average pupil, unlike those reacting to alternative rules. For example, it is plausible that the group of pupils born in August and entering school at the age of just about six (younger than prescribed by the Hamburg Accord) are above-average achievers and hence distinct from the representative pupil. If virtually all ‘compliers’ born in August and September are high achievers, it may be that the ‘compliers’ for an instrument based on the end of August as the cut-off date are affected differently by the variation in the age of school entry than compliers with the official rule of the Hamburg Accord. This hypothesis is confirmed in the discussion paper version (Puhani and Weber 2005).
Table 7 OLS and second-stage results (PIRLS)
Estimate / sampling window: OLS, January–December | 2SLS, June/July | 2SLS, June–September | 2SLS, January–December
Specification 1 (s.e.): −12.80** (3.0) | 28.17** (13.2) | 32.87** (11.3) | 30.74** (6.2)
Specification 2 (s.e.): −11.49** (3.0) | 28.18** (13.1) | 33.24** (11.3) | 30.64** (6.3)
Specification 3 (s.e.): −8.65** (2.7) | 28.98** (12.6) | 34.29** (11.0) | 27.14** (6.2)
Specification 4 (s.e.): −4.57** (2.3) | 26.41** (11.5) | 33.20** (10.2) | 27.37** (5.8)
Specification 5 (s.e.): −1.24 (2.2) | 25.83** (11.2) | 31.67** (9.7) | 26.77** (5.6)
Observations: 6,591 | 1,123 | 2,943 | 6,591
Note: *Significant at the 10% level. **Significant at the 5% level. The different specifications (specifications 1–5) are explained in Table 2
Source: PIRLS 2001. Own calculations
Having justified the instrument in terms of exogeneity and (partial) correlation with the age of school entry, we present the results of the second stage of the 2SLS estimates in the following section.
5 The effect of age of school entry on educational outcomes
5.1 Ordinary least squares results
Tables 7 and 8 report the estimated effects of age of school entry on educational outcomes from regressions with different sets of control variables (‘specification 1’ in the first line indicating no control variables, and the last line indicating the full set of control variables as listed in Table 2). Note that, while in the PIRLS data set the outcome measure is the fourth grade reading test score, in the Hessen data the outcome relates to the secondary school track, which is coded by the years of education necessary for the completion of the degree corresponding to the track (2SLS estimation). Alternatively, we define a binary response variable for attendance of the highest secondary track (Gymnasium) in the administrative data for Hessen and estimate a probit instrumental variable model instead of 2SLS. The first columns of Tables 7 and 8 show the OLS regression coefficients for the full samples (pupils born in January to December). In both data sets, the regression coefficient is negative and significantly different from zero if no control variables are included (specification 1). This means that educational outcomes and age of school entry are negatively correlated: pupils who enter school at a later age achieve less than their peers entering at a younger age.
Table 8 OLS and second-stage results (administrative data for Hessen)
Estimate / sampling window: OLS, January–December | 2SLS, June/July | Probit-IV, June/July | 2SLS, June–September | Probit-IV, June–September | 2SLS, January–December | Probit-IV, January–December
Specification 1 (s.e.): −0.37** (0.01) | 0.40** (0.05) | 0.12** (0.01) | 0.45** (0.04) | 0.12** (0.01) | 0.45** (0.03) | 0.11** (0.01)
Specification 2 (s.e.): −0.36** (0.01) | 0.38** (0.05) | 0.12** (0.01) | 0.44** (0.04) | 0.12** (0.01) | 0.44** (0.03) | 0.11** (0.01)
Specification 3 (s.e.): −0.31** (0.01) | 0.37** (0.05) | 0.12** (0.01) | 0.42** (0.04) | 0.12** (0.01) | 0.41** (0.03) | 0.10** (0.01)
Observations: 182,676 (OLS) | 32,059 (June/July) | 64,072 (June–September) | 182,676 (January–December)
Note: 2SLS coefficients indicate the marginal effect of higher age at school entry on years of education according to the current track. An effect of 0.40 years of schooling corresponds to a 12% increase in the probability to attend the higher level school versus the lower level schools. Probit instrumental variable estimates report the estimated change in the probability to attend the highest level secondary school (Gymnasium) if school entry is at age 7 compared to age 6, where control variables are set to their mean. Estimates were obtained using the statistical software ‘Stata’. The standard errors of estimated effects reported in the Probit-IV columns are calculated using the ‘delta method’. *Significance at the 10% level, **significance at the 5% level. The different specifications (specifications 1–3) are explained in Table 2
Source: Student-level data of the statistics of general schools for the state of Hessen 2004/2005 provided by the State Statistical Office (Hessisches Statistisches Landesamt). Own calculations
However, as we include more and more control variables in the regressions (specifications 2ff.), the OLS coefficients decrease in absolute value in both data sets, indicating that the actual age of school entry is influenced by factors relevant to educational performance. This is highly suggestive of age of school entry being an endogenous variable, which warrants instrumental variable estimation.
5.2 Two-stage least squares results
What happens to the estimated effect of age of school entry on educational outcomes if we apply 2SLS estimation with the instrument discussed in Sect. 4? A glance at Tables 7 and 8 reveals, first, that instrumental variable estimation switches the sign of the estimated effect from negative to positive in both data sets. Second, the 2SLS estimates are all positive and significantly different from zero. Third, the differences between the point estimates of different sampling windows are smaller than a standard deviation of the narrowest sampling window. Fourth, the size of the estimated effects hardly varies with the choice of control variables (i.e. between ‘specification 1’ and ‘specification 5’/‘specification 3’ in Tables 7 and 8, respectively): indeed, the variation of the 2SLS estimates within a column is virtually always less than any estimated standard error of a coefficient in that column. In the following, we discuss the 2SLS results in detail by data set. As reasoned in Sect. 4.2, the inclusion of more control variables in the 2SLS regressions mostly reduces the standard error of the estimated coefficient on
age of school entry (as we move from ‘specification 1’ to ‘specification 5’) in the PIRLS data set (Table 7). The main finding in Table 7 is that the estimated effect of age at school entry on educational outcomes varies from 25.8 to 29.0 test score points in the narrowest sampling window and is rather robust, with estimates ranging from 26.8 to 34.3 when using wider sampling windows. How can the results be interpreted? A representative estimate based on the narrowest sampling window (discontinuity sample) is an increase in test scores of around 27 points for entering school 1 year older (being about 7 instead of 6 years old). This is about two fifths of the standard deviation of test scores in PIRLS. More intuition for the size of this effect is derived from a comparison with the differences in test scores between the different German school tracks in the PISA 2000 study (where ninth graders’ reading literacy is tested).12 In the PISA data for ninth graders, the differences in test scores are 0.78 standard deviations between pupils in Gymnasium and Realschule and 1.01 standard deviations between Realschule and Hauptschule (Baumert et al. 2003). Therefore, our estimates imply that entering school 1 year older increases reading literacy by more than half of the difference between the average Gymnasium track and the average Realschule track performance. This is quite a substantial effect and indicates that age of school entry may influence track choice, as also shown in the following paragraphs. Table 8 presents the effects of age of school entry on track attendance in the middle of secondary school. Results are based on administrative data for the state of Hessen. The outcome is measured by the number of school years associated with each track, as outlined in Sect. 3.2. Alternatively, we show effects from probit instrumental variable estimations indicating the change in the probability of attending the highest secondary school track (Gymnasium) due to school entry at 7 instead of 6 years, with the control variables set to their means. Because the administrative data for Hessen is large in terms of the number of observations (in fact we observe the population), the reported ‘standard errors’ in Table 8 all indicate significance. As to the estimated effect of age of school entry on educational outcomes using the Hamburg Accord as instrument, the 2SLS estimation for different sampling windows yields comparable estimates in the range of 0.37–0.40 for the narrowest sampling window and 0.41–0.45 for the wider sampling windows. There is only minor variation among specifications with different sets of control variables.13 Entering school at the age of 7 rather than 6 raises secondary schooling by almost half a year, around 5 months (assuming pupils will complete the track which they attend in the middle of secondary school, when we observe them).
12 We do not use the PISA data for our estimations, because it does not contain the required information.
13 The reported coefficients would be similar but somewhat higher if we did not exclude persons still in primary school from the sample. If we include primary school pupils (with code 4 as the outcome, cf. footnote 9), the coefficients related to the narrowest (widest) sampling window range between 0.43 and 0.46 (0.46 and 0.49). Hence, early school entry seems to increase the likelihood of repeating grades in primary school. As a further robustness check we exclude pupils in comprehensive and special schools (Gesamtschule and Sonderschule). In this case the effects are only slightly different from the presented effects and range between 0.36 and 0.39 (0.42 and 0.47).
This effect is implied if a deferral of school entry by 1 year increases the probability of attending Gymnasium instead of Realschule by about 13% points. The estimated effect is potentially driven by both increases in the probability of attending Realschule rather than Hauptschule and increases in the probability of attending Gymnasium rather than Realschule. In order to find out which of these effects drives the results, we first estimate linear probability models of Gymnasium versus Realschule/Hauptschule attendance as well as of Gymnasium/Realschule versus Hauptschule attendance. Estimates were obtained by 2SLS using the same instrument and control variables as in Table 8. The results show increases in Gymnasium versus Realschule/Hauptschule attendance of between 11 and 13% points and increases in Gymnasium/Realschule versus Hauptschule attendance of about 2–3% points. The numbers are very robust and significant across different specifications. Hence, it seems that the age of school entry matters for achieving Gymnasium attendance, which is the step towards university education and high labour market returns. Subsequently, we estimate probit instrumental variable models of the probability of attending Gymnasium rather than Realschule/Hauptschule. The estimated effect of entering school 1 year older (evaluated at the mean of the control variables) is 12% points using the first two sampling windows and between 10 and 11% points using the full sample. Hence, all our estimation procedures (2SLS with the school track coded according to the years needed to complete the track, 2SLS linear probability models and probit instrumental variable models) lead to virtually the same conclusions regarding Gymnasium versus Realschule/Hauptschule attendance. Note, however, that we do not have statistics on the percentage of pupils having attended Gymnasium in grade 6 who complete Gymnasium by obtaining the Abitur degree (equivalent to British A-levels). Back-of-the-envelope calculations based on administrative data for Hessen suggest that around 20% of pupils attending Gymnasium in grade 6 have left Gymnasium by grade 10 in Hessen. There might be further attrition in grades 11–13 (when Gymnasium ends). However, as pupil panel data currently do not exist, to the best of our knowledge, we cannot judge at this stage to what extent our estimates exaggerate the effect of school entry age on final schooling achievement. However, separate estimates by school entry cohort suggest that the estimated effect shows no declining trend for older cohorts. Hence, with the data at hand, we have no indication that mobility between school tracks neutralises age of school entry effects in the middle of secondary school.
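One way to reproduce this kind of second-stage estimate is a 2SLS fit with the track outcome (years of schooling implied by the track, or a Gymnasium dummy as a linear probability model) and the instrumented entry age; the sketch below uses the linearmodels package as one possible tool and hypothetical variable names, and the closing comment spells out the back-of-the-envelope conversion from a years-of-schooling effect into percentage points.

```python
# Sketch of the second-stage (2SLS) estimation on a Hessen-style outcome,
# assuming a DataFrame `df` with hypothetical columns 'track_years' (9/10/13),
# 'gymnasium' (0/1), 'entry_age', 'theoretical_age', 'male', 'birth_month'.
import pandas as pd
from linearmodels.iv import IV2SLS

def iv_effect(df: pd.DataFrame, outcome: str, window: list[int]) -> float:
    sub = df[df["birth_month"].isin(window)].assign(const=1.0)
    res = IV2SLS(dependent=sub[outcome],
                 exog=sub[["const", "male"]],          # included controls
                 endog=sub["entry_age"],               # instrumented regressor
                 instruments=sub["theoretical_age"]).fit(cov_type="robust")
    return res.params["entry_age"]

# Discontinuity sample (born June/July):
# beta_years = iv_effect(df, "track_years", [6, 7])   # about 0.4 in the paper
# beta_gym   = iv_effect(df, "gymnasium", [6, 7])     # about 0.12 in the paper
# Rough conversion: moving from Realschule (10 years) to Gymnasium (13 years)
# adds 3 years, so 0.40 extra years / 3 years per switch is roughly a 13%-point
# higher probability of attending Gymnasium rather than Realschule.
```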
5.3 Results for subgroups
Having established robust evidence that a relatively older age of school entry improves educational outcomes, we carry out a subgroup analysis in Tables 9 and 10 for the two data sets. For the PIRLS data, Table 9 displays first-stage coefficients and F-statistics as well as second-stage estimation results for native males, native females, immigrant males, immigrant females and for pupils with parents with and without an academic degree, respectively.
Table 9 Subgroup results for the PIRLS data
Entries: first-stage coefficient (F-statistic); second-stage coefficient (s.e.)
Male – native (full sample: 2,642 observations; born June/July: 447 observations)
  Full sample: first stage 0.45** (138.9); second stage 42.86** (8.6)
  Born June/July: first stage 0.30** (21.6); second stage 59.83** (22.5)
Female – native (full sample: 2,717 observations; born June/July: 469 observations)
  Full sample: first stage 0.56** (244.7); second stage 16.23** (8.4)
  Born June/July: first stage 0.52** (104.5); second stage 7.25 (12.8)
Male – immigrant (full sample: 668 observations; born June/July: 109 observations)
  Full sample: first stage 0.44** (33.4); second stage 20.50 (20.2)
  Born June/July: first stage 0.43** (17.7); second stage 67.38* (36.2)
Female – immigrant (full sample: 564 observations; born June/July: 98 observations)
  Full sample: first stage 0.38** (10.8); second stage 37.65 (30.0)
  Born June/July: first stage 0.30** (4.6); second stage −4.06 (62.1)
Parents: academic degree (full sample: 1,330 observations; born June/July: 223 observations)
  Full sample: first stage 0.35** (45.2); second stage 29.36* (17.0)
  Born June/July: first stage 0.29** (10.1); second stage 32.11 (30.5)
Parents: no academic degree (full sample: 5,261 observations; born June/July: 900 observations)
  Full sample: first stage 0.53** (438.6); second stage 25.71** (5.9)
  Born June/July: first stage 0.43** (97.1); second stage 24.14** (11.6)
Note: Effects for the full specifications (specification 5). *Significant at the 10% level. **Significant at the 5% level. F refers to the F-statistics of significance of the instruments in the first-stage regressions
Source: PIRLS 2001. Own calculations
The estimates are exhibited for two sampling windows, i.e. the full sample and the narrowest ‘discontinuity sampling’ window, and refer to the specification with all control variables (specification 5). The main result from the subgroup analysis based on the PIRLS data is that German males benefit more than German females from later school entry: coefficients are 42.9 (standard error 8.6) versus 16.2 (standard error 8.4) in the full samples, respectively. Due to smaller sample sizes and large standard errors (the latter ranging from 5.9 to 62.1 test score points), the subgroup estimates, especially in the discontinuity samples, are generally harder to pin down. Potentially for the same reasons, some estimated effects for male immigrants (full sample), female immigrants (full and discontinuity sample), female natives (discontinuity sample) and pupils with parents holding an academic degree (discontinuity sample) are not significantly different from zero. Note that only the effects for the group of persons who comply with the instrument in the respective subgroup are identified by 2SLS. Therefore, the estimated ‘LATE’ need not be representative for the subgroups in general (for example, if most immigrant males enter school at the age of 7 anyway, the compliers will be a small and unrepresentative group).
Table 10 Subgroup results for the administrative data from the state of Hessen
Entries: first-stage coefficient (F-statistic); second-stage coefficient (s.e.)
Male – native (German speaking countries) (full sample: 79,400 observations; born June/July: 13,898 observations)
  Full sample: first stage 0.50** (3885.8); second stage 0.41** (0.04)
  Born June/July: first stage 0.41** (1025.0); second stage 0.35** (0.08)
Female – native (German speaking countries) (full sample: 77,106 observations; born June/July: 13,555 observations)
  Full sample: first stage 0.50** (3845.2); second stage 0.45** (0.04)
  Born June/July: first stage 0.41** (1039.2); second stage 0.39** (0.08)
Male – Turkish (full sample: 5,772 observations; born June/July: 1,009 observations)
  Full sample: first stage 0.46** (221.0); second stage 0.21 (0.14)
  Born June/July: first stage 0.42** (62.5); second stage 0.33 (0.23)
Female – Turkish (full sample: 5,647 observations; born June/July: 1,045 observations)
  Full sample: first stage 0.49** (255.5); second stage 0.32** (0.13)
  Born June/July: first stage 0.45** (88.3); second stage 0.32 (0.22)
Male – predominantly Muslim countries (without Turkey) (full sample: 1,539 observations; born June/July: 247 observations)
  Full sample: first stage 0.36** (25.0); second stage 0.37 (0.41)
  Born June/July: first stage 0.31** (6.2); second stage −0.24 (0.72)
Female – predominantly Muslim countries (without Turkey) (full sample: 1,474 observations; born June/July: 248 observations)
  Full sample: first stage 0.35** (26.3); second stage 0.55 (0.40)
  Born June/July: first stage 0.43** (16.0); second stage 1.00* (0.55)
Male – Italy/Greece (full sample: 1,462 observations; born June/July: 271 observations)
  Full sample: first stage 0.52** (86.9); second stage −0.16 (0.26)
  Born June/July: first stage 0.37** (22.5); second stage 0.34 (0.61)
Female – Italy/Greece (full sample: 1,419 observations; born June/July: 244 observations)
  Full sample: first stage 0.51** (67.1); second stage −0.07 (0.27)
  Born June/July: first stage 0.50** (31.3); second stage −0.57 (0.44)
Male – former Yugoslavia (full sample: 1,217 observations; born June/July: 213 observations)
  Full sample: first stage 0.46** (48.9); second stage 0.04 (0.34)
  Born June/July: first stage 0.51** (20.1); second stage 0.01 (0.51)
Female – former Yugoslavia (full sample: 1,190 observations; born June/July: 221 observations)
  Full sample: first stage 0.45** (46.2); second stage 0.95** (0.41)
  Born June/July: first stage 0.38** (15.7); second stage 1.09 (0.76)
Note: Effects for the full specifications (specification 3). *Significant at the 10% level. **Significant at the 5% level. F refers to the F-statistics of significance of the instruments in the first-stage regressions
Source: Student-level data of the statistics of general schools for the state of Hessen 2004/2005 provided by the State Statistical Office (Hessisches Statistisches Landesamt), data on school starting dates. Own calculations
However, the first-stage coefficients show that the degree of compliance is similar for most
subgroups, especially in the full sample. First-stage coefficients in the full sample mainly range between 0.44 and 0.56. Exceptions are immigrant females and pupils whose parents have attained an academic degree, for whom compliance is somewhat lower (the full-sample first-stage coefficients for these two groups are 0.38 and 0.35, respectively). As in Table 9 for the PIRLS data, the estimates in Table 10 for Hessen are shown both for the full sample (pupils born January to December) and for the discontinuity sample (pupils born June/July) and refer to ‘specification 3’ with all control variables. The subgroup results for the administrative data for the state of Hessen do not confirm that German males benefit more from later school entry than German females. However, the different results from these two data sets need not contradict each other, as PIRLS measures only reading literacy, whereas the secondary school track in the data for Hessen is a more general indicator of educational attainment. In the administrative data for Hessen, we can distinguish between different groups of nationalities (German, Turkish, predominantly Muslim countries without Turkey, Italy/Greece and former Yugoslavia). As sample sizes for all subgroups except Germans and Turks are below 1,600 (full samples) or 300 (discontinuity samples), the standard errors of the second-stage estimates range between 0.26 and 0.76, so that second-stage coefficients for these nationality groups are hard to pin down. We therefore ignored other nationality groups with even smaller sample sizes. The first-stage coefficients for almost all subgroups are close to those of the sample as a whole, exceptions being males and females from predominantly Muslim countries without Turkey, where compliance is lower (full-sample first-stage coefficients range between 0.35 and 0.36 for these groups, compared to between 0.45 and 0.52 for the rest). Although there is some indication based on the first-stage F-statistics that the instruments for these two groups are not that strong, the marginally significant point estimate for females from predominantly Muslim countries without Turkey tentatively suggests that they benefit more than natives from a later age of school entry. However, the large standard errors associated with these estimates make this interpretation somewhat speculative, as the difference in the estimated effects is not statistically significant. The smaller point estimates for Turkish than for native pupils are also associated with sizeable standard errors, making this difference statistically insignificant. We cannot detect any significant effects of age of school entry for male or female pupils from Italy and Greece or for males from former Yugoslavia. However, at least in the full sample, the estimated effect for females from former Yugoslavia is significant and the largest of all groups (0.95), albeit with a sizeable standard error (0.41). In order to find out whether the insignificance of many subgroup estimates can be explained by smaller sample sizes, we drew random sub-samples of native males, a group for which we found a significant effect. Results based on these random sub-samples indicate that the estimates are not robust and generally insignificant when based on fewer than 1,500 observations, which unfortunately affects almost all of our subsamples of foreigners (with Turkish citizens born
January to December as the exception). Hence, larger ‘samples’ (we already observe the population) or a higher degree of compliance would be needed to make statistically safe statements about immigrants (defined as non-citizens in the data for the state of Hessen).
6 Conclusions
Based on instrumental variable estimation, we recover positive and statistically significant effects on educational outcomes of entering school at a relatively higher age in the current German school system. In the fourth grade of primary school, we find a large effect of about 0.40 standard deviations improvement in the PIRLS test score if the pupil enters at about the age of 7 rather than 6 (i.e. a year later according to the school entry rule). This amounts to more than half of the difference in the average Gymnasium versus Realschule test scores in the OECD PISA study. Administrative data for the state of Hessen suggest that the effect of age of school entry persists into secondary school by increasing the probability of attending the most academic secondary schooling track (Gymnasium) by 12% points. Assuming that the attended track is completed, this amounts to prolonging the average years of schooling by almost half a year (about 5 months). Compared to Fredriksson and Öckert (2005) and Bedard and Dhuey (2006), who apply an instrumental variable strategy similar to ours to Swedish administrative data and to international TIMSS data together with additional data for the US and Canada, respectively, the results for Germany are comparable in size: Fredriksson and Öckert (2005) report that entering school a year later increases ninth graders’ grade point average by about 0.2 standard deviations. Similarly, the effects reported in Bedard and Dhuey (2006) range from 0.2 to 0.5 standard deviations for fourth graders in the countries investigated. Strøm (2004) estimates an effect of 0.2 standard deviations for 15–16 year olds in the Norwegian PISA study, arguing that age of school entry is exogenously driven by regulations in Norway.14 However, these and our estimates differ from those of Angrist and Krueger (1992) and Mayer and Knutson (1999) for the United States, where either no or negative effects of late school entry are reported. The findings for the US can only be partly explained by the fact that quarter of birth in the US, unlike in Germany, affects the duration of compulsory schooling: no and negative effects of later school entry are found for persons having obtained post-compulsory schooling in Angrist and Krueger (1992) and Mayer and Knutson (1999), respectively.
14 Our estimates based on the PIRLS data (0.40 standard deviations) are at the high end of the range of results from other countries. However, in relation to the first-stage coefficients reported for 11 countries in Table 3 of Bedard and Dhuey (2006), as well as those in Fredriksson and Öckert (2005) for Sweden, the degree of compliance with the instrument in Germany is at the very low end in international comparison. As we can only estimate a LATE, the compliers in Germany might be less representative of the average pupil than in Sweden, for example, where compliance is higher. This might be one reason, apart from differences in school systems, data collection and other factors, why point estimates differ across countries. Indeed, correlating first- and second-stage coefficients for the 11 countries analysed in Table 3 of Bedard and Dhuey (2006) provides a correlation of −0.19 for science and −0.02 for maths test scores in the TIMSS study. Hence, at least for maths, estimates based on a larger degree of compliance seem to be associated with a lower average treatment effect. We thank Peter Fredriksson for pointing this issue out to us.
Given the current trend in Germany to have pupils start school earlier, we interviewed 25 primary school headmasters or headmistresses in the state of Hessen by telephone. We asked them about their views on our finding that late school entry improves educational performance.15 Of the 25 schools, two were operating under a special regime in which pupils enter school at the age of five but receive extra speech therapy, German language and nursery teacher support. In these schools, 5–6 year olds do not enter grade 1 but ‘grade 0’, which is a mixture between a kindergarten and a school regime. Both schools are satisfied with this regime, as the extra teaching and nursery resources allow them to correct deficits some children have (one of these schools stated that it has a 75% immigrant share). In a third school, we were not able to communicate the substance of our question. However, in the remaining 22 ‘standard’ primary schools, 95% of headmasters or headmistresses (21 out of 22) said they found our results ‘plausible’. Most ‘standard’ primary schools were opposed to early school entry under the current ‘standard’ educational regime, but supported the idea of early school entry if the school system changed to something similar to the special regime schools, which have extra support for pupils with learning, language or social problems and a ‘grade 0’ that combines learning with kindergarten elements. In a further telephone survey of ten schools, we told the headmistresses and headmasters that we had found that early school entry was good for children, i.e. we told them the opposite of what we really found in the data.16 It turned out that eight of the ten schools disagreed that early school entry into the current German school system was sensible. However, four of those eight schools would be in favour of earlier school entry if the school system were adapted to the needs of younger children (more breaks, smaller classes and an adapted curriculum were named as suggestions).

It is important to note that our identification strategy does not allow us to distinguish between absolute and relative age effects. If our findings were solely driven by relative age effects (peer effects), this study would not provide insights concerning the merits of changing the official school entry age. All we would be able to tell from our findings would be that it is disadvantageous to be one of the youngest children in a given class. However, from our school survey we have some indication that teachers think that absolute age effects matter most: when we asked our contact persons what they believed could be the reasons for our findings, 21 out of 22 school representatives made statements along the lines that older pupils are more mature, are more able

15 We drew 30 telephone numbers of primary schools from the school registry of Hessen until we
managed to talk to 25 of them (three schools refused to be interviewed by telephone and in two of the schools we could not reach a contact person after several attempts).
16 We thank Dominique Meurs for suggesting this strategy.
to concentrate when having to keep still in the classroom for long periods of time, are more able to organise themselves (like keeping their belongings together), are less distracted by play and find it easier to overcome frustration. Only 18% of schools (four out of 22) felt that relative age effects matter, too. The other schools, however, explicitly denied the importance of relative age effects and stressed that it is personal maturity that matters. Similarly, in the second telephone interview of ten schools, the lack of personal maturity (rather than the relative age) was given as the reason why early school entry was not favoured in the current system. This impression concerning the importance of absolute age effects is consistent with findings by Fredriksson and Öckert (2005).

If we therefore believe that our results are driven by absolute age effects, the policy conclusions depend on whether we observe a pure absolute maturity effect or a maturity-learning interaction effect. If a pure maturity effect drives our results, changing the age of school entry for all pupils makes no difference to the performance gap between older and younger pupils. However, if our results are driven by maturity-learning interaction effects (i.e. pupils starting school at the age of 7 learn more in grades 1–4 than pupils starting school when they are younger), the efficiency of early education could be improved by increasing the school entry age. Yet, positive effects of later school entry would have to be weighed against the economic losses of higher labour market entry ages.

All in all, our statistical analysis cannot predict the educational implications of changing the official age of school entry regulations. However, we have shown that the age of school entry matters at the individual level under the current school entry regime. In order to separate the underlying causes driving our statistical results (the relative age, the pure maturity and the maturity-learning interaction effect), different data, including information on relative age in the assigned class and on the development of abilities over the life-cycle, would need to be collected. In any case, our results should not be interpreted as evidence against early learning per se. Early learning might generally be promising. Which type of early learning works best will be another interesting research agenda, once state governments decide to collect and make available appropriate data in this respect.

Acknowledgements This project has been initiated through discussions with Michael Fertig, RWI, Essen. We are also grateful to Andreas Ammermüller, Bernd Fitzenberger, Gianni De Fraja, Peter Fredriksson, Karsten Kohn, Edwin Leuven, Stephen Machin, Dominique Meurs, Kjell Salvanes and three anonymous referees as well as participants of the IZA Summer School 2005 in Buch am Ammersee, participants of the CEPR-IFAU-Uppsala Universitet Second Network Workshop ‘Economics of Education and Education Policy in Europe’ in Uppsala, participants of the ZEW ‘Rhein-Main-Neckar Arbeitsmarktseminar’ in Mannheim and seminar participants at the Universities Paris II (Panthéon-Assas) and St. Gallen for helpful comments. We thank Hans-Peter Hafner of the Research Data Center (Forschungsdatenzentrum) of the Statistical Office of the state of Hessen for help with the administrative data for Hessen. Björn Schumacher provided excellent research assistance. All remaining errors are our own.
Part of this research was supported by the Anglo-German Foundation within the project “The Economics and Politics of Employment, Migration and Social Justice”, which is part of the Foundation’s research initiative “Creating Sustainable Growth in Europe”.
References

Angrist JD (2004) American education research changes track. Oxf Rev Econ Policy 20:198–212
Angrist JD, Krueger AB (1992) The effect of age at school entry on educational attainment: an application of instrumental variables with moments from two samples. J Am Stat Assoc 87:328–335
Baumert J, Trautwein U, Artelt C (2003) Schulumwelten – institutionelle Bedingungen des Lehrens und Lernens. In: Deutsches PISA-Konsortium (ed) PISA 2000. Ein differenzierter Blick auf die Länder der Bundesrepublik Deutschland. Verlag Leske + Budrich, Opladen, pp 261–331
Bedard K, Dhuey E (2006) The persistence of early childhood maturity: international evidence of long-run age effects. Working paper, Department of Economics, University of California, Santa Barbara. Q J Econ (forthcoming)
Bertram T, Pascal C (2002) Early years education: an international perspective. Qualifications and Curriculum Authority, London
Bos W, Lankes EM, Prenzel M, Schwippert K, Walther G, Valtin R (2003) Erste Ergebnisse aus IGLU. Waxmann Verlag, Münster
Bound J, Jaeger DA, Baker RM (1995) Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variables is weak. J Am Stat Assoc 90:443–450
Cunha F, Heckman JJ, Lochner L, Masterov DV (2006) Interpreting the evidence on life cycle skill formation. In: Hanushek E, Welch F (eds) Handbook of the economics of education. Elsevier, North-Holland, New York (forthcoming)
Currie J (2001) Early childhood education programs. J Econ Perspect 15:213–238
Del Bono E, Galindo-Rueda F (2004) Do a few months of compulsory schooling matter? The education and labour market impact of school leaving rules. IZA discussion paper no. 1233
Dustmann C (2004) Parental background, secondary school track choice, and wages. Oxf Econ Pap 56:209–230
Fertig M, Kluve J (2005) The effect of age at school entry on educational attainment in Germany. IZA discussion paper no. 1507
Fredriksson P, Öckert B (2005) Is early learning really more productive? The effect of school starting age on school and labour market performance. IZA discussion paper no. 1659
Gonzalez EJ, Kennedy AM (2003) PIRLS 2001 user guide for the international database. International Study Center, Lynch School of Education, Boston College, Boston
Graue ME, DiPerna J (2000) Redshirting and early retention: who gets the “Gift of Time” and what are its outcomes? Am Educ Res J 37:509–534
Hahn J, Todd P, Van der Klaauw W (2001) Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69:201–209
Hanushek EA, Wößmann L (2006) Does educational tracking affect performance and inequality? Differences-in-differences evidence across countries. Econ J 116:63–76
Hutchison D, Sharp C (1999) A lasting legacy? The persistence of season of birth effects. NFER conference paper, presented at the British Educational Research Association Conference, University of Brighton
Imbens GW, Angrist JD (1994) Identification and estimation of local average treatment effects. Econometrica 62:467–475
Jimerson S, Carlson E, Rotert M, Egeland B, Sroufe LA (1997) A prospective, longitudinal study of the correlates and consequences of early grade retention. J Sch Psychol 35:3–25
Kern A (1951) Sitzenbleiberelend und Schulreife. Verlag Herder, Freiburg
Kinard EM, Reinherz H (1986) Birthdate effects on school performance and adjustment: a longitudinal study. J Educ Res 79:366–372
Leuven E, Lindahl M, Oosterbeek H, Webbink D (2004) New evidence on the effect of time in school on early achievement. HEW 0410001, Economics Working Paper Archive at WUSTL
May DC, Kundert DK, Brent D (1995) Does delayed school entry reduce later grade retentions and use of special education services? Remedial Spec Educ 16:288–294
Mayer S, Knutson D (1999) Does the timing of school affect how much children learn? In: Mayer S, Peterson P (eds) Earning and learning: how schools matter. Brookings Institution Press, Washington, pp 79–102
Morrison FJ, Griffith EM, Alberts DM (1997) Nature-nurture in the classroom: entrance age, school readiness, and learning in children. Dev Psychol 33:254–262
Proctor TB, Black KN, Feldhusen JF (1986) Early admission of selected children to elementary school: a review of the research literature. J Educ Res 80:70–76
Puhani PA, Weber AM (2005) Does the early bird catch the worm? Instrumental variable estimates of educational effects of age of school entry in Germany. IZA discussion paper no. 1827
Sharp C (2002) School starting age: European policy and recent research. NFER conference paper, presented at the LGA seminar ‘When Should Our Children Start School?’, LGA Conference Centre, London
Staiger D, Stock JH (1997) Instrumental variables regression with weak instruments. Econometrica 65:557–586
Stipek D (2002) At what age should children enter Kindergarten? A question for policy makers and parents. Soc Policy Rep 16:3–16
Stipek D, Byler P (2001) Academic achievement and social behaviors associated with age of entry into kindergarten. Appl Dev Psychol 22:175–189
Stock JH, Wright JH, Yogo M (2002) Survey of weak instruments and weak identification in generalized method of moments. J Bus Econ Stat 4:518–529
Strøm B (2004) Student achievement and birthday effects. Mimeo, Norwegian University of Science and Technology
Willer CW, Dyment DA, Sadovnick AD, Rothwell PM, Murray TJ, Ebers GC (2005) Timing of birth and risk of multiple sclerosis: population based study. Br Med J 330:120–123
Zill N, Loomis LS, West J (1997) The elementary school performance and adjustment of children who enter Kindergarten late or repeat Kindergarten: findings from national surveys. National Household Education Survey, NCES statistical analysis report 98–097. US Department of Education, National Center for Education Statistics, Washington
Peer effects in Austrian schools
Nicole Schneeweis · Rudolf Winter-Ebmer
Accepted: 18 August 2006 / Published online: 27 September 2006 © Springer-Verlag 2006
Abstract This study deals with educational production in Austria and focuses on the impact of schoolmates on students’ academic outcomes. We use PISA 2000 and 2003 data to estimate peer effects for 15 and 16 year old students. School fixed effects are employed to address the potential self-selection of students into schools and peer groups. The estimations show significant positive effects of the peer group on students’ reading achievement, and less so for mathematics. The peer effect in reading is larger for students from less favorable social backgrounds. Furthermore, quantile regressions suggest peer effects in reading to be asymmetric in favor of low-ability students, meaning that students with lower skills benefit more from being exposed to clever peers, whereas those with higher skills do not seem to be affected much.

Keywords Education · Peer effects · PISA

1 Introduction

Economics of education deals with the explanation of academic achievement of students. Some of the determinants of cognitive development, like individual inputs, parental counselling and “good parenting”, cannot be influenced much
N. Schneeweis (B) · R. Winter-Ebmer
University of Linz, Altenbergerstraße 69, 4040 Linz, Austria
e-mail: [email protected]

R. Winter-Ebmer
Institute for Advanced Studies, Vienna, Stumpergasse 56, 1060 Vienna, Austria
e-mail: [email protected]
by public policy, whereas the use of school resources can. Typical discussions about school resources concern the education and pay of teachers as well as class size effects. Whereas the evidence for the effects of class size is somewhat mixed, many studies suggest that organizational changes in schools can have sizeable effects on academic achievement (e.g., Betts 1998; Wößmann 2003a).

Among organizational changes, the composition of classes is internationally one of the most studied topics. The starting point is the assumption that children learn not only from their teachers but also from class- and school-mates. The peer group is seen as an important source of motivation and aspiration. One can distinguish a direct and an indirect channel through which learning among students can occur. First, students influence each other directly, by learning in groups, helping one another and talking about concepts, techniques and perspectives. The peer group provides new information and new ways of seeing; therefore, talk among students is one of the activities that can greatly enhance cognitive ability. Second, students are influenced by their peers indirectly, via observational learning. Peers act as important role models, which are seen as powerful means of transmitting attitudes, values and patterns of thought and behavior (Bandura 1986). Besides these psychological approaches, social interactions among students can be seen in terms of constraints, expectations and preferences (Manski 2000). It can be assumed, for example, that the disutility to one student from learning decreases with the effort level of the others (Bénabou 1993). Furthermore, one student can produce positive or negative externalities in classroom learning through smart or disruptive behavior (Lazear 2001). Overall, the peer group is assumed to influence students’ academic outcomes in many different ways.

The impact of the peer group on academic achievement—the peer effect—is the main issue in this study. The magnitude and nature of peer effects may affect the optimal organization of schooling. The question of whether to track students in different schools and classes or to prefer a more comprehensive education system can perhaps be answered by analyzing social interactions among students. The most important question is: “Should high-ability students be grouped together or should they be spread evenly among schools and classes?” Proponents of a segregative education system argue that such systems make it possible to target the needs of the students more closely and make it easier for teachers to handle classroom management, whereas opponents claim that less gifted students need the presence of clever peers to stimulate learning. Peer effects can differ for students in relation to their social background as well as to their ability. If asymmetric peer effects can be detected in the sense that low-ability students are more influenced by their peers than are good students, a decrease in educational stratification will increase the total amount of learning, and reshuffling students will be an issue of economic efficiency. If the asymmetry goes the other way around, and high-ability students are more sensitive to peers, tracking will be the optimal policy. A reallocation of students will be a question of distribution only if peer effects turn out to be symmetric.
Recent research on school tracking and segregation assesses the advantages and disadvantages of early segregation in schools. Brunello et al. (2006) found that there is a trade-off between the returns to specialization and vocational skills on the labor market, which would call for early tracking, and the costs of early selection, which are basically the costs of erroneously allocating students and of less general education as such. Hanushek and Wößmann (2006) investigated the results of six international student assessments in 18–26 countries and found clear evidence that early tracking increases educational inequality. Additionally, the authors found some indications that early tracking reduces average performance. Other arguments for more integrated schools come from growth studies. Krueger and Kumar (2004) have argued that the European emphasis on early tracking in schools in favor of vocational education might have harmed European growth prospects because more general education is more conducive to the development of and adaptation to technological change.

In this study we want to shed some light on the magnitude of peer effects relative to other schooling inputs and find out whether the peers’ influence is symmetric or asymmetric.1 Educational production functions are estimated for Austria with data from PISA 2000 and 2003. In detail, we address the following questions for students in Austrian secondary education: Do peer groups have a measurable effect on student achievement? Are students from less favorable home environments and low-ability students more reliant on their peers? Are academic outcomes affected adversely by social heterogeneity? Are there differences between the subjects reading and mathematics?
2 The identification of peer group effects

Manski (1995, 2000) describes a framework for a systematic analysis of social interactions. He states three different hypotheses as to why individuals belonging to the same group might tend to behave alike:

• Endogenous effects: The probability that an individual behaves in some way increases with the presence of this behavior in the group. In our case, student achievement depends positively on the average achievement in the peer group.
• Contextual effects: The probability that an individual behaves in some way depends on exogenous background characteristics of the group. In our case, student achievement depends on the socio-economic composition of the peer group.
• Correlated effects: Individuals behave in the same way because they have similar background characteristics and face similar environments. In our case, student achievement is correlated within the group because students

1 We focus on the cognitive development of students only; other aspects of education, like social learning, are disregarded. Arguments can be made that exposure to students from different backgrounds could improve social skills in particular.
come from similar home environments and are instructed by the same teachers in the same schools.

Endogenous and contextual effects are driven by social interactions, whereas correlated effects are a non-social phenomenon. It is important to distinguish between endogenous and contextual effects. Positive contextual effects mean that an individual student i’s achievement will rise if a classmate j with a performance-furthering background arrives. In the case of endogenous effects, the interaction is not completed yet; the actual increase in achievement of student i will further the achievement of student j—there are repercussions, a multiplier effect. For social and educational policy it is important to know whether, by individually enhancing the cognitive performance of one single student in a class, the achievement of the classmates is promoted automatically. Unfortunately, contextual and endogenous effects cannot be separated empirically because the background characteristics of student i also determine student i’s achievement: a problem of multicollinearity. Moreover, the investigation of endogenous effects faces a classic simultaneity problem because mean achievement of the group is taken as a regressor, but achievement in the group is itself influenced by the achievement of the student in question; this is the reflection problem according to Manski (2000). It is very difficult to overcome this problem without resorting to very strong restrictions. In agreement with the literature, we only estimate contextual effects to circumvent these problems.

Another econometric problem concerns self-selection of students into schools and peer groups. If students with higher (unobserved) abilities choose better schools and peer groups, the estimated peer effect will be biased upwards. Our identification strategy is twofold: first, we include rich information on the students’ family backgrounds and, second, we introduce school fixed effects to get rid of sorting problems associated with different qualities of schools, different neighborhoods and parental backgrounds. We concentrate on the variation of peer quality within schools and compare students from grade 9 to those of grade 10 within each school. We have to assume that students from grades 9 and 10 do not differ in school quality and that they do not select themselves into the different grades. These assumptions are very reasonable.2 Similar techniques were used by Schindler-Rangvid (2003), McEwan (2003) and Ammermüller and Pischke (2006). Schindler-Rangvid estimated peer effects in education for Danish students with PISA data. Her identification strategy hinges on the non-selective Danish school system and the availability of additional register data, which reduces the omitted variables bias. McEwan (2003) and Ammermüller and Pischke (2006) used school fixed effects and compared students in different classes to overcome the sorting problem. This
2 Another strategy would be to use a value-added model where students are observed over time and person fixed effects can be adopted. This is not possible with the PISA data. Apart from this, the value-added models run into difficulties when the student body stays constant over time, resulting in too little variation in peer composition over time. See Gibbons and Telhaj (2006) for a value-added model in which the authors can observe students changing schools in the observation period.
is a good strategy, but it requires the rather strong assumption that students are not sorted into different classes according to their abilities.

A number of other empirical studies have been carried out to measure peer effects in primary and secondary education (e.g., Hoxby 2000; Levin 2001; Fertig 2003; Hanushek et al. 2003; Robertson and Symons 2003; Angrist and Lang 2004; Betts and Zau 2004; Vigdor and Nechyba 2004; Gibbons and Telhaj 2006) as well as in higher education (e.g., Sacerdote 2001; Arcidiacono and Nicholson 2005; Winston and Zimmerman 2006). Most of the studies found sizeable positive effects of school- or class-mates on student achievement, with these effects found to be somewhat stronger at the class level. Some of the studies raised the question of whether peer effects are asymmetric. Schindler-Rangvid (2003) found peer effects to be stronger for weaker students in Denmark, Levin (2001) identified stronger effects for weaker students in the Netherlands, and Sacerdote (2001) and Winston and Zimmerman (2006) found some evidence for non-linearities, mostly in favor of low-ability students, in US higher education. Concerning heterogeneity in classrooms and schools, the results are ambiguous: Schindler-Rangvid (2003) found no significant effects of social heterogeneity in Denmark, Fertig (2003) found some negative impact of ability dispersion for the USA and Vigdor and Nechyba (2004) found positive effects of ability dispersion for students in North Carolina. Peer effects have also been investigated in other fields of research, like teenage behavior (Kooreman 2006; Soetevent and Kooreman 2006), juvenile delinquency (Bayer et al. 2004) or youth smoking (Krauth 2005; Eisenberg 2004). An interesting experiment on peer effects in work productivity was carried out by Falk and Ichino (2006). The authors found significant peer effects and, furthermore, found low-productivity workers to be more sensitive to the behavior of co-workers.

3 Empirical framework

The empirical analysis is based on data from PISA 2000 and 2003.3 The two waves of the Program of International Student Assessment have been conducted by the OECD in a number of countries. About 15–16 year old students, reaching the end of compulsory schooling, were tested in reading, mathematics and science, and, additionally, detailed background information about students and schools was collected. In total, about 4,600 Austrian students from about 200 schools were assessed for PISA in each wave. The major domain in the PISA 2000 wave was reading literacy; therefore, two-thirds of all test questions focused on reading topics and all participating students were assessed in reading. Mathematics and science were subdomains, and not every student answered a mathematics or science question. In 2003, the major domain was

3 For information on PISA achievement in Austria and across countries, see Haider et al. (2001) and OECD (2001, 2004), and for a detailed description of the survey design and sampling, see OECD (2002, 2005).
mathematics, and all students were assessed in mathematics. Again, reading and science questions were not administered to every student. Therefore, we focus on peer effects in reading with the PISA 2000 data and in mathematics with the data from PISA 2003.

In Austria, lower and upper secondary education are both organized in a segregative way. First, students at the age of ten have to choose a school type and a school. They can attend a ‘Hauptschule’ or a ‘Gymnasium’ (higher general school), where the latter school type admits only students with better academic records, employs teachers with higher qualifications, pays higher wages and offers an academically preferable curriculum. At the age of 14, the students and their parents again choose a school type and a school. There are four broad alternatives: leaving the school system and starting to work, attending an intermediate vocational school, a higher vocational school or a higher general school. Universities and a number of post-secondary programs are restricted to students who have finished a higher vocational or general school. If the first alternative is chosen, the student has to attend a 1-year pre-vocational school; thereafter, an apprenticeship can be started, which is accompanied by a part-time vocational school. Within the broad alternatives of intermediate vocational, higher vocational and higher general schools, a number of different school types exist, depending on the vocational or academic orientation. In total, the Austrian PISA students attend 17 different school types. Moreover, the importance of private schools is limited in Austria and only about 10% of all PISA students attend private secondary schools.

We estimate peer effects using a standard model of educational production, in which the outcome of education, the PISA score, is estimated as a function of the students’ individual characteristics, family background indicators and peer group attributes. We show two different specifications. In the first one, we include school resources and school type dummies—this corresponds to a school type fixed effects model, where we assume that the allocation of students among the school types describes the selection process perfectly. The second specification, our preferred one, includes school fixed effects and can be written as

Y_isg = β_0 + β_1 X_isg + β_2 P̄_−isg + μ_s + ε_isg,

where Y_isg is the educational outcome of student i in school s and grade g, X_isg is a vector of individual and family characteristics, P̄_−isg represents mean peer characteristics computed without the contribution of student i, μ_s is a school fixed effect and ε_isg is the unobserved error term.
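As a purely illustrative sketch of how this specification could be taken to data, the snippet below constructs the leave-one-out peer mean within school-by-grade cells, applies sample restrictions of the kind described later in this section (cells with at least eight students, schools contributing two peer groups) and estimates the school fixed effects model by within-school demeaning. The column names and the simulated data are assumptions standing in for the PISA files, not the authors' code.

```python
# Sketch of the leave-one-out peer variable and the school fixed effects model
# Y_isg = b0 + b1*X_isg + b2*Pbar_-isg + mu_s + e_isg (simulated data, assumed names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_schools, pupils_per_cell = 60, 16

rows = []
for s in range(n_schools):
    school_effect = rng.normal(0, 20)        # mu_s: unobserved school quality
    for g in (9, 10):                        # two grades (peer groups) per school
        ses = rng.normal(50 + school_effect / 4, 14, pupils_per_cell)
        # Outcome with an own-SES effect and a contextual peer effect via the cell mean.
        score = (520 + school_effect + 0.3 * ses
                 + 0.6 * (ses.mean() - 50) + rng.normal(0, 60, pupils_per_cell))
        rows += [{"school": s, "grade": g, "ses": a, "score": b}
                 for a, b in zip(ses, score)]
df = pd.DataFrame(rows)

# Leave-one-out peer mean of SES in each school-by-grade cell:
# (cell sum minus own value) / (cell size minus one).
cell = df.groupby(["school", "grade"])["ses"]
df["peer_ses"] = (cell.transform("sum") - df["ses"]) / (cell.transform("count") - 1)

# Sample restrictions: cells with at least eight students, schools with two grades.
df = df[cell.transform("count") >= 8]
df = df[df.groupby("school")["grade"].transform("nunique") == 2]

# School fixed effects via within-school demeaning of the outcome and the regressors.
def within(cols):
    return df[cols] - df.groupby("school")[cols].transform("mean")

y = within(["score"]).to_numpy()
X = within(["ses", "peer_ses"]).to_numpy()
beta = np.linalg.lstsq(X, y, rcond=None)[0].ravel()
print(f"own SES coefficient:  {beta[0]:.3f}")
print(f"peer SES coefficient: {beta[1]:.3f}")
```

Demeaning within schools is numerically equivalent to including a full set of school dummies, which is why only variation in peer quality across grades within the same school identifies the peer coefficient.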
A critical point in measuring the influence of the peer group is the fact that there is no information about the “real” reference group of a student. As we cannot directly identify the friends of the student in question, we have to assume that students are significantly influenced by their classmates, keeping in mind that students spend a relatively large part of their time at school. The studies of Kooreman (2006) and Soetevent and Kooreman (2006) indicate that classmates are important in determining the behavior of high-school teenagers, such as smoking, drinking and truancy. Peers from the same school (class) should be even more important for directly school-related behavior like learning. The PISA data do not contain information about classes, but about schools and grades. Thus, the peer group in our study is defined as students attending the same school and grade. We expect the estimated peer effects to be smaller in our study compared to studies where students can be matched directly with their classmates: Betts and Zau (2004) and Vigdor and Nechyba (2004) showed that the analysis of peer effects at the class level yields stronger effects than at the grade level.

In selecting the samples for the study from the whole PISA samples, we focused on several criteria. First, students from two school types were dropped: those from schools for students with special needs as well as those attending vocational schools (‘Berufsschulen’). Schools for students with special needs are not comparable to other schools in terms of curriculum and ability of students. Vocational schools are part-time schools for apprentices: the students spend approximately one full day a week at school in addition to learning their job by working in a firm. We expect to find the real reference group of these youths in the firms in which they are employed or in their neighborhoods, rather than at school. Peer groups are based on students attending the same schools and grades; thus, students with missing grade values had to be excluded (about 2.6% in the reading and 0.16% in the math sample). To represent peer quality, an indicator of the students’ socio-economic background is used, and students with missing values of this major explanatory variable are dropped (1% in the reading and 2.6% in the math sample). Since peer quality is represented by mean characteristics of a student’s peers, we restricted the samples to peer groups of at least eight students. Furthermore, the school fixed effects estimation requires two peer groups per school; thus, schools with only one peer group were excluded from the samples. The size of the peer group varies between 8 and 27 students, with the mean peer group consisting of 16 students. To account for the possibility of non-random missing values in control variables and to maintain sample size, missing dummies are included in all regressions and the missing explanatory variables are set to zero. In the end, we are left with a reading sample of 2,529 students and a mathematics sample of 2,824 students. The samples are still representative for secondary education students in Austria, not for the whole student population, but for those in intermediate and higher schools.

Table 1 gives a detailed description of the variables used as well as summary statistics. The dependent variables are plausible values of student achievement in reading and mathematics. As each PISA test consists of a battery of questions with different difficulty levels and the students answered different test questions, the PISA team constructed estimates (plausible values) of the students’ actual scores, using the observed item responses (OECD 2002, 2005). Peer quality is modeled with the peers’ socio-economic index of occupational status. The socio-economic status was derived from students’ reports on parental occupations and ranges from 16 to 90. Lower values indicate a lower index
Table 1 Summary statistics
Means for the reading and the mathematics sample; standard deviations in parentheses for non-binary variables.

Reading score (first plausible value of the reading test score): Reading 541.06 (SD 73.75)
Math score (first plausible value of the math test score): Mathematics 544.76 (SD 81.18)

Individual characteristics
Female (student is female): Reading 0.59; Mathematics 0.57
Grade (grade at school): Reading 9.52; Mathematics 9.50

Family structure
Nuclear family (student lives with mother and father, or guardians): Reading 0.87; Mathematics 0.84
Single parent family (student lives with mother or father, or one guardian): Reading 0.12; Mathematics 0.15
Other family (student lives with others): Reading 0.01; Mathematics 0.02

Ethnicity
Ethnic Austrian (student is ethnic Austrian): Reading 0.87; Mathematics 0.86
Immigrant (student was not born in Austria): Reading 0.05; Mathematics 0.07
Parents immigrated (mother, father or both not born in Austria): Reading 0.09; Mathematics 0.08

Family background: mother education
Mother no sec education (mother did not attend school or finished elementary school only): Reading 0.03; Mathematics 0.02
Mother low sec education (mother finished lower secondary education): Reading 0.20; Mathematics 0.08
Mother up sec education (mother finished upper secondary education aimed at entering the labor market): Reading 0.48; Mathematics 0.45
Mother ‘Matura’ (mother finished upper secondary education aimed at entering post-secondary or tertiary education): Reading 0.08; Mathematics 0.20
Mother tertiary education (mother finished post-secondary or tertiary education): Reading 0.21; Mathematics 0.25

Socio-economic status (highest international socio-economic index of occupational status reached by a parent; lower values indicate a lower status): Reading 51.79 (SD 13.67); Mathematics 50.39 (SD 16.35)
Table 1 continued
Books at home Educational resources
Parent jobless
Parents work fulltime
School characteristics: School size Weeks per year Urban school
Students/teacher Teacher qualification
Regular testing
Promotion of gifted
Promotion of low achievers
Description
Reading
Mathematics
Mean
SD*
Mean
SD
Number of books at home Index of home educational resources (a dictionary, a quiet place to study, text books, calculators), lower values indicate poorer resources Student’s father is looking for a job (if father is missing, student’s mother is drawn on) Both parents work fulltime or one parent works fulltime if the other is missing
222.91
226.60
208.12
221.16
0.36
0.72
0.29
0.69
Total enrollment in school School weeks per year School is located in a city with more than 100,000 residents Student teacher ratio Fraction of teachers with a university degree in pedagogy Students are assessed by standardized and/or teacher-developed tests four or more times a year (three in the mathematics sample) School provides extra courses for gifted students (enrichment math in math sample) School provides special training in language and/or special courses in study skills for low achievers (remedial math in math sample)
659.68
506.88
686.55
563.28
38.96
2.52
39.20
2.15
0.01
0.02
0.35
0.36
0.31
0.28
9.68
2.03
9.76
1.94
0.94
0.15
0.71
0.27
0.86
0.99
0.48
0.19
0.78
0.72
Table 1 continued Variable
Description
Reading Mean
Lack of material
Teacher shortage
Teacher behavior
Peer characteristics: Socio-economic status peers
Status heterogeneity
School types: Higher general schools GYM RGYM ORG Higher vocational schools BHSt BHSk BHSw BHSl ALE
There is (to some extent) lack of instructional material at school There is (little, somewhat, a lot) shortage or inadequacy of teachers at school (math teachers in math sample) Index of principal’s view on teacher behavior (teachers’ expectations, student -teacher relations, meeting of students’ needs, teacher absenteeism, resisting change, too strict, encouragement of students to achieve their potential), lower values indicate poorer climate Mean socioeconomic status in the peer group SD of socioeconomic status in the peer group ‘Gymnasien’ humanistic orientation scientific orientation scientific orientation (only grades 9–12) ‘Berufsbildende Höhere Schulen’ technical, art and trades business domestic science and commercial agriculture, forestry teacher training
Mathematics SD*
Mean
0.10
0.26
0.23
0.18
SD
−0.17
0.80
0.29
0.90
51.79
6.31
50.39
8.30
12.26
2.90
14.30
2.98
0.11 0.07 0.08
0.08 0.07 0.08
0.17
0.18
0.17 0.11
0.14 0.09
0.02
0.02
0.04
0.02
Table 1 continued
Description
Reading Mean
Intermediate vocational schools BMSt BMSk BMSw BMSl
‘Berufsbildende Mittlere Schulen’ technical, art and trades business domestic science and commercial agriculture, forestry
Mathematics SD*
Mean
0.05
0.03
0.04 0.08
0.02 0.04
0.04
0.05
Small schools Moderately small Very small
– –
Number of schools Number of students
86 2,529
SD
0.15 0.03 95 2,824
*No standard deviation is reported for binary variables
of socio-economic status. The variable is a continuous measure of occupational stratification and is based on a ranking of occupations that maximizes the indirect effect of education on income, while minimizing the direct effect, net of age (Ganzeboom et al. 1992). This index is better suited to the Austrian situation than the educational level of the parents, because PISA provides standardized educational categories (ISCED) that do not fit the Austrian education system well, and valuable information is lost in this compression. Moreover, the socio-economic index of occupational status shows a larger variation than education itself because it also incorporates the development of the parents after school.

In a first step, we use survey regressions to estimate educational production functions and to measure the mean effect of peer quality on students’ academic outcomes. The survey estimation technique is used because it takes into account that the sample is not random but the product of a complex stratified sampling procedure. To assure representativeness, three design effects are considered. First, student weights are employed, accounting for differences in sampling probabilities. Second, the methodology takes into account that variation among students from the same school may be smaller than between schools by estimating cluster-robust standard errors. Third, sampling has been done independently across strata (school types); therefore, the strata are statistically independent and can be analyzed as such.

Survey regression, like OLS, is designed to estimate mean effects, that is, the effects of explanatory variables for the average student. By estimating peer effects with quantile regressions, one can estimate different effects for different students on the conditional test score distribution (Koenker and Bassett 1978). All observations are used and the effects for different quantiles are estimated by weighting the residuals differently, depending on the quantile in question. Robustness to potential heteroscedasticity can be achieved by bootstrapping methods, in which the standard errors are obtained by resampling the data; we employed 200 bootstrap replications.
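A compact sketch of this bootstrap quantile-regression step is shown below, using synthetic data and the QuantReg estimator from statsmodels. The variable names, magnitudes and the simplified design (no survey weights or stratification) are assumptions for illustration, not the authors' implementation.

```python
# Sketch: quantile regressions with bootstrapped standard errors (200 replications,
# as in the text). Synthetic data; column names are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2_000
peer_ses = rng.normal(50, 6, n)
own_ses = rng.normal(50, 14, n)
score = 400 + 1.0 * peer_ses + 0.3 * own_ses + rng.normal(0, 70, n)

X = sm.add_constant(np.column_stack([peer_ses, own_ses]))

def peer_coef(y, X, q):
    # Coefficient on peer SES (column 1) from a quantile regression at quantile q.
    return float(np.asarray(sm.QuantReg(y, X).fit(q=q).params)[1])

n_boot = 200
for q in (0.15, 0.25, 0.50, 0.75, 0.85):
    point = peer_coef(score, X, q)
    draws = []
    for _ in range(n_boot):              # resample students with replacement
        idx = rng.integers(0, n, n)
        draws.append(peer_coef(score[idx], X[idx], q))
    print(f"q={q:.2f}: peer effect {point:6.3f} (bootstrap SE {np.std(draws):.3f})")
```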
4 Results

The following section describes the empirical results. Section 4.1 deals with mean peer effects and gives an account of the basic model used in all further estimations. In Sect. 4.2, we test the hypotheses that students with a lower socio-economic background and low-ability students are more reliant on their peers. Furthermore, the question of whether students are adversely affected by social heterogeneity in the peer group is investigated.

4.1 Mean peer effects

Table 2 gives results for mean peer effects in reading and mathematics. Columns 1 and 3 show survey regression results including individual, family and school characteristics as well as 14 indicators for school types, specification (1). Columns 2 and 4 include school fixed effects instead of school types and school resources, specification (2). The fixed effects estimates should be purged of selection (school choice) because they are identified only by variation in peer quality within schools.

The results of specification (1) show significant peer effects for both reading and mathematics. These effects are not much different if the indicators for school resources are excluded from the regressions (results not reported here). Once we include school fixed effects, the coefficients for mean socio-economic status of peers get much smaller (in particular for mathematics) and lose significance. This result points towards the importance of selection effects: abilities of students differ between schools of the same type. Of course, by differencing out school effects, potential measurement errors—arising from an imprecise measurement of parental socio-economic background and the fact that we do not observe all peers in a classroom—lead to some attenuation bias.4

Besides the peer group, the effects of the other variables should also be mentioned. The majority of individual characteristics show the expected effects. Females perform better in reading, males perform better in mathematics. Grade is an important predictor of achievement; students attending the tenth grade perform better than students in the ninth. Living in a single-parent family does not have the expected negative effect: compared to nuclear families, where students live with both parents, the estimates suggest that these students perform a bit better in reading, but the statistical significance is too low to draw any conclusions. Immigrants and second-generation immigrants perform considerably worse than ethnic Austrians.

4 Ammermüller and Pischke (2006), using a similar fixed effects strategy, argue that fixed effect
estimates suffer from a substantial attenuation bias; if they correct for the measurement error by using an instrumental variables strategy, they find fixed effects results which are very close to the underlying OLS results.
Table 2 Mean peer effects Reading Variable Socio-economic status peers
(1)
1.266 (0.321)*** Female 16.979 (3.379)*** Grade 25.997 (2.527)*** Nuclear family (reference category) Single parent family 6.113 (5.404) Other family 1.496 (13.853) Ethnic Austrian (reference category) Immigrant −32.441 (8.388)*** Parents immigrated −14.900 (5.859)** Mother no sec education (reference category) Mother low sec education 0.413 (5.917) Mother up sec education 9.464 (6.055) Mother Matura 11.161 (6.755) Mother tertiary education 7.727 (6.809) Socio-economic status 0.236 (0.117)** Books at home 0.038 (0.007)*** Educational resources 3.201 (1.918)* Parent jobless −42.305 (12.608)*** Parents work fulltime −6.402 (2.866)** School size 0.011 (0.006)** Weeks per year 2.662 (0.865)*** Urban school −2.223 (6.612) Students/teacher 4.414 (8.215) Students/teacher squared −0.348 (0.386) Teacher qualification 8.280 (12.835) Regular testing −10.730 (6.618) Promotion of gifted −0.302 (4.339)
Mathematics (2)
(1)
(2)
1.713 (0.273)*** −22.799 (3.410)*** 31.074 (2.241)***
−0.140 (0.309) −21.386 (3.410)*** 30.756 (1.794)***
−1.507 (3.299) −25.200 (11.028)**
−1.023 (3.215) −25.212 (11.883)**
−25.973 (8.316)*** −11.817 (5.598)**
−27.380 (6.224)*** −26.320 (4.071)***
−26.649 (6.015)*** −23.457 (4.350)***
1.131 (5.822) 9.162 (5.838) 10.273 (6.678) 7.718 (6.721) 0.207 (0.117)* 0.034 (0.006)*** 2.764 (1.982) −36.297 (12.657)*** −5.300 (2.881)*
9.491 (7.599) 9.545 (6.973) 14.427 (7.468)* 2.742 (7.817) 0.217 (0.092)** 0.050 (0.007)*** −1.056 (1.952) 1.813 (10.587) −4.038 (2.708) 0.004 (0.004) −0.332 (1.091) 6.794 (5.500) −9.916 (8.065) 0.739 (0.389)* 34.232 (11.215)*** 102.022 (24.291)*** 15.576 (8.206)*
9.312 (7.447) 8.406 (7.060) 15.304 (7.198)** 3.190 (7.988) 0.094 (0.092) 0.045 (0.006)*** −1.524 (1.899) 3.470 (10.996) −2.917 (2.595)
0.484 (0.440) 16.141 (3.563)*** 26.746 (2.450)*** 6.831 (5.434) 0.078 (13.460)
Table 2 continued Reading Variable Promotion of low achievers Lack of material Teacher shortage Teacher behavior GYM (reference category) RGYM ORG BHSt BHSk BHSw BHSl ALE BMSt BMSk BMSw BMSl
(1)
Mathematics (2)
−0.109 (5.199) 5.434 (6.306) 8.463 (6.434) 7.197 (3.417)** −28.822 (11.684)** −34.229 (14.712)** −33.489 (9.119)*** −9.648 (8.708) −34.861 (10.201)*** −6.455 (17.044) −23.556 (10.668)** −62.814 (12.827)*** −47.870 (14.980)*** −68.712 (13.542)*** −77.057 (17.329)***
Very small
School fixed effects Number of observations R2
104.889 (65.095) No 2,529 0.303
(2)
−7.383 (5.590) −1.612 (6.626) −5.946 (5.912) 2.881 (3.000)
Moderately small
Constant
(1)
258.982 (34.330)*** Yes 2,529 0.360
−11.508 (10.260) −1.900 (9.538) 50.305 (12.412)*** 1.826 (8.883) −19.901 (17.416) 40.196 (12.869)*** −8.691 (28.033) −6.790 (14.234) −142.972 (12.222)*** −36.175 (10.743)*** −15.086 (12.350) −23.063 (9.195)** −43.309 (21.520)** 65.737 (70.096) No 2,824 0.414
218.338 (22.739)*** Yes 2,824 0.472
Survey regressions, standard errors in parentheses, dummies for missing values of some explanatory variables included ***, ** and * indicate a statistical significance at 1, 5 and 10%
The coefficients for immigrants are comparable in magnitude to that of grade; one can say that students with a foreign background are about 1 year behind the others.5

5 See Schneeweis (2006) for an analysis of native-migrant achievement differentials across countries.
The students’ family background indicators show important effects, especially for reading skills. The family’s socio-economic status, the number of books at home and the parents’ labor market status have the expected effects. The mother’s education level is a common predictor of educational achievement, but once we control for socio-economic status, the variable loses explanatory power. Specifications with the father’s education are even less significant.

Compared to individual characteristics and family background, school resources are less important. In reading, some effects are found for school size, instructional weeks per year and teacher behavior. In mathematics, the teacher qualification indicator, regular testing of students and promotion activities for gifted students seem to further the proficiency of students. The number of students per teacher has no significant effect.6 Because better students might attend high-quality schools, which are equipped with better resources, the estimated effects of school characteristics have to be interpreted with caution.

School type dummies are highly significant and influence academic achievement considerably. We found that students in the intermediate vocational schools have much lower reading and mathematics scores: on average, they perform about 64 reading points and about 50 mathematics points worse than students in higher general schools. Altogether, the segregative school system of Austria is reflected in the large and statistically significant effects of school types on academic outcomes. Implementing school type fixed effects alone when studying peer effects should, therefore, reduce the self-selection bias considerably.

To sum up, peer effects in reading and mathematics can be detected in simple survey regressions, but once school fixed effects are introduced, mean peer effects cease to be significant.
4.2 Asymmetric peer effects

There is no firm evidence on peer effects for the average individual. Still, it is possible that significant effects for subgroups of students in a class are hidden by the general picture. The possibility of asymmetric peer effects is very interesting from a policy point of view because their existence is often associated with a discussion of regrouping students in schools or classrooms. While peer effects as such give no room for policy conclusions in the tracking debate, the following questions are important: For whom does the peer group matter? Are students from less supportive families more influenced by their peers? Do clever students or weaker students profit more from being confronted with clever peers? To address these issues, two hypotheses are tested:
6 For a detailed discussion on class size effects see Hanushek (1997, 1999, 2002), Krueger and Whitmore (2001) and Krueger (2003).
Table 3 Asymmetric peer effects with respect to family background (A)

                               Reading                                  Mathematics
Variable                       (1)                 (2)                  (1)                 (2)
Socio-economic status peers    2.622 (0.889)***    1.987 (0.965)**      1.568 (0.601)**     −0.852 (0.610)
Own status * status peers      −0.026 (0.015)*     −0.027 (0.015)*      0.003 (0.009)       0.013 (0.010)
School type effects            Yes                 No                   Yes                 No
School fixed effects           No                  Yes                  No                  Yes
Number of observations         2,529               2,529                2,824               2,824

Survey regressions, standard errors in parentheses, dummies for missing values of some explanatory variables included, individual characteristics and family background included; school characteristics included in (1). ***, ** and * indicate statistical significance at 1, 5 and 10%
1. Students from less favorable socio-economic backgrounds are more dependent on others in their learning and are, therefore, more influenced by their peer group.
2. Low-achieving students with a larger cognitive distance to their peers profit more from good students because there is more to be learned when levels are low. On the other hand, low-achieving students could be less affected by clever peers because observational learning from peers as well as a healthy competitive learning climate might require similar cognitive abilities.7

To test the first hypothesis, we estimate the two model specifications by interacting the mean socio-economic status of the peer group with the student’s own socio-economic background. The estimated coefficients are presented in Table 3. For reading, we do find statistically significant asymmetric effects: the peer effect is highest for students with a low socio-economic background and gets smaller for those with a more favorable background. The results do not differ much between the two specifications. These important interactions explain why we do not find significant peer effects with the simple specifications in Table 2. The quantitative peer group effect of the school fixed effects model can be interpreted as follows: moving a student to a new peer group whose quality is one standard deviation higher, all else equal, will raise the student’s reading achievement. A student with a lower socio-economic index (25th percentile) experiences an increase in achievement of 5.2 points from this move, which amounts to 7.1% of the standard deviation of the reading scores. A student from a more favorable background (75th percentile) experiences an increase of 3.3% of a standard deviation. The effects in specification (1) are a bit larger.

7 Asymmetries can also arise for psychological reasons, like concerns for relative position, status,
envy or a preference to conform to others. While the first arguments call for aspirations geared toward the top, conformists are oriented toward the average; they would not react to a mean-preserving increase in the spread of scores.
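The interaction specification just described, own socio-economic status times peer status with school fixed effects (reported in Table 3), can be sketched as follows. The simulated data and the column names (ses, peer_ses, school) are assumptions, so the snippet only illustrates the form of the regression and of the cluster-robust inference, not the authors' code.

```python
# Sketch of the asymmetric-peer-effect specification: peer SES interacted with own SES,
# school dummies as fixed effects, cluster-robust standard errors by school.
# Simulated data; names and magnitudes are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, n_schools = 2_000, 80
df = pd.DataFrame({
    "school": rng.integers(0, n_schools, n),
    "ses": rng.normal(50, 14, n),
    "peer_ses": rng.normal(50, 6, n),
})
# True model: the peer effect shrinks as own SES rises (an asymmetric peer effect).
df["score"] = (500 + 0.3 * df["ses"]
               + (2.0 - 0.03 * df["ses"]) * df["peer_ses"]
               + rng.normal(0, 70, n))

fit = smf.ols("score ~ ses + peer_ses + ses:peer_ses + C(school)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.params[["peer_ses", "ses:peer_ses"]])
```

A negative coefficient on the interaction term, as in the reading results, means that students with a less favorable own background respond more strongly to the quality of their peer group.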
In contrast to this, there are no significant effects for mathematics: while a mean peer effect is visible in specification (1), no asymmetries can be detected. In the fixed effects version, both effects are very small and insignificant.

Why is there such a difference between reading and mathematics? Other studies also confirm that peer effects in secondary education are more sizable in reading than in mathematics. The literature suggests that peer effects in mathematics are more important in the early grades of education, while in later grades the peer effect in language gets stronger. Robertson and Symons (2003) found 11-year-old British children to be more influenced by their peers in mathematics. Levin (2001) found significant effects in grade 4 only in mathematics, higher peer effects in mathematics in grade 6 and a kind of convergence to reading by grade 8. Vigdor and Nechyba (2004) used panel data for North Carolina and found higher peer effects in mathematics for fifth graders; the picture turns around in higher grades, the effects in mathematics decline, and the academic achievement of eighth-grade students is about 1.6 times more strongly influenced by peers in reading than in mathematics. We observe a similar picture if we estimate peer effects separately for the ninth and tenth grades using interactions: the effect in mathematics is a bit higher in grade 9, and the opposite is true for reading.

Moreover, in the Austrian situation, mobility of students could play a role: if several schools of the same type are available in a city, the students (or their parents) have to decide which one to choose. As public rankings of schools are rare in Austria, parents and students must take their assessment of school quality from hearsay. It is possible that weaker students or those from a lower socio-economic background prefer schools where teachers are considered to be less demanding and exams are easier to pass; this could apply more in the case of mathematics (and science) because failure rates in these fields are considerably higher. This would explain why the step from a survey regression with school types to school fixed effects has practically no consequences for peer effects in reading but substantial ones for mathematics. There are some other indications that support this interpretation: (i) school fixed effects take away the significant impact of students’ own socio-economic status (compare columns 3 and 4) and (ii) school variables like teacher qualification, regular testing and promotion of gifted students are more important in explaining mathematics test scores than reading test scores. All these indicators point toward a higher importance of school effects in mathematics education. Furthermore, we checked the standard deviation of the school dummies and found a higher one in mathematics. Within the different school types, the standard deviations of the school effects are also higher in mathematics.

A related non-parametric strategy is to divide students into two categories, derived from their own family background: top and bottom students. Top students are those with a higher socio-economic index compared to the average in their peer group, and bottom students are on or below the average. We allow the peer effect to be different for each category. Table 4 shows the estimated peer effects, which corroborate our results from above. In reading, we get lower effects for top students, which lose significance once fixed effects are introduced.
For students from a lower socio-economic background, peer effects are significantly higher, both in specification (1) and in specification (2). To demonstrate the different magnitudes, imagine an increase in peer quality of 6.3 points (one standard deviation of the mean socio-economic status in the peer group). A student located at the bottom of the distribution will benefit from an increase of 11 reading points according to specification (1) and 6 points according to specification (2). A student with a status higher than the mean gains seven and two points, respectively. Thus, the peer effect is approximately twice as high for students from a low family background. In mathematics, peer effects in specification (1) are somewhat higher for the top group, but both effects break down in the fixed effects model.

So far, we have assumed that students are influenced by the mean characteristics of their fellow students. Alternatively, we might assume that top or bottom peers are more influential. We carried out some further estimations (not reported in the tables) that allow not only different effects for different receiving students, but also different effects from different sending students. The question is whether different students are more influenced by the high achievers or the low achievers in the peer group. In other words: Are the relevant peers for high achievers the other high achievers or the low achievers? The estimations yield significant effects only for specification (1), in both reading and mathematics. We found that top students are positively influenced if the bottom part of the peer group shows a higher quality.

Table 4 Asymmetric peer effects with respect to family background (B)
Table 4 Asymmetric peer effects with respect to family background (B)

Reading                          (1)                  (2)
Socio-economic status peers
  Top students                   1.068 (0.350)***     0.351 (0.488)
  Bottom students                1.697 (0.450)***     0.943 (0.506)*
School type effects              Yes                  No
School fixed effects             No                   Yes
Number of observations           2,529                2,529

Mathematics                      (1)                  (2)
Socio-economic status peers
  Top students                   2.030 (0.319)***     0.124 (0.348)
  Bottom students                1.651 (0.326)***     −0.208 (0.322)
School type effects              Yes                  No
School fixed effects             No                   Yes
Number of observations           2,824                2,824

Survey regressions, standard errors in parentheses; dummies for missing values of some explanatory variables included; individual characteristics and family background included; school characteristics included in (1). ***, ** and * indicate statistical significance at 1, 5 and 10%
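For illustration only, a minimal sketch of how an asymmetric specification of this kind can be set up. The variable names (score, ses, peer_ses, school), the leave-one-out peer mean and the synthetic data are assumptions, not the authors' code or the PISA files, and the sketch ignores survey weights and the school-type and school fixed effects used in the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_schools = 2500, 50

# Synthetic student-level data standing in for the PISA sample (hypothetical columns).
df = pd.DataFrame({
    "school": rng.integers(0, n_schools, size=n),
    "ses": rng.normal(50, 15, size=n),            # own socio-economic status
})
grp = df.groupby("school")["ses"]
# Leave-one-out mean SES of schoolmates as the peer-quality measure.
df["peer_ses"] = (grp.transform("sum") - df["ses"]) / (grp.transform("count") - 1)
df["score"] = 400 + 1.5 * df["peer_ses"] + 0.8 * df["ses"] + rng.normal(0, 40, n)

# Split the peer effect by family background: "top" students lie above the
# mean SES of their peer group, "bottom" students are on or below it.
df["top"] = (df["ses"] > df["peer_ses"]).astype(float)
X = pd.DataFrame({
    "peer_ses_top": df["peer_ses"] * df["top"],
    "peer_ses_bottom": df["peer_ses"] * (1.0 - df["top"]),
    "top": df["top"],
    "ses": df["ses"],
})
res = sm.OLS(df["score"], sm.add_constant(X)).fit(cov_type="HC1")
print(res.params[["peer_ses_top", "peer_ses_bottom"]])  # separate peer effects
```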
We found that top students are positively influenced if the bottom part of the peer group shows a higher quality. For bottom students, the quality of the top part seems to be more important. These are interesting results and point to the importance of tutoring activities between high and low achievers in these classes: both the tutor and the pupil can learn in the process.
Recent research in behavioral economics has shown that individuals are heterogeneous with respect to cooperative behavior and that the willingness to cooperate often depends on the cooperativeness of others (Gächter and Thöni 2005). In our model of social interactions, we can assume that peer effects depend on the cooperativeness in the class. The PISA data provide an index of cooperative learning behavior of the students, which was derived from a number of questions describing the student's attitude toward learning with others (e.g., I like to work with others, help other people, think it's helpful to put together everyone's ideas). We interacted the peer effect with the mean index of cooperative behavior in the peer group and found higher peer effects in reading in groups with a higher level of cooperative learning, both with and without school fixed effects. The mathematics estimations yield no significant results.
The second hypothesis is tested with quantile regressions, allowing peer effects to vary for students with different cognitive abilities, according to the PISA score. Estimates are reported for the 15th, 25th, 50th, 75th and 85th percentiles of the conditional test score distribution. Table 5 shows the estimated effects for each quantile. In reading, it appears that students in the lower part of the ability distribution are more affected by their peers than students with abilities above the median; this picture is confirmed once fixed effects are introduced, but the statistical significance is reduced. Again, interactions with own socio-economic status show negative effects: peer effects are highest for low and median achievers who come from a lower parental background. In mathematics, the results are less revealing: for students with test scores below the 85th percentile, peer effects are fairly similar, but these effects disappear completely in the fixed effects specification.
In terms of public policy, one has to be cautious: there are some indications that a more equal allocation of high-ability students across schools may yield a higher level of achievement in reading. Low ability students as well as those with less favorable social backgrounds seem to profit more. These results have to be taken with care, because sorting across schools seems to play some role, in particular concerning the mathematics results. Before starting to change the socio-economic composition of schools, one would want to check whether a different composition of the student body could harm fellow students. This could be the case if there are adverse effects of social heterogeneity in schools and classes. Students may be influenced not only by the mean level of peer quality but by the diversity of the peers as well. Thus, the effect of social heterogeneity on academic achievement is tested by introducing the standard deviation of the peer quality variable. Table 6 shows coefficients from quantile regressions. In reading, no significant negative coefficients of status heterogeneity are to be found, either with or without school fixed effects. In mathematics, the regressions show some negative effects for the 15th percentile and some positive effects for the 85th percentile.
So it seems that more diversity in mathematics education could be good for the bright students, but bad for the low achievers.
Table 5 Asymmetric peer effects with respect to ability and family background

Quantile                         0.15              0.25              0.50              0.75              0.85
Reading (1)
  Socio-economic status peers    2.932 (1.717)*    3.744 (1.493)**   2.979 (1.118)***  1.974 (1.188)*    0.995 (1.368)
  Own status × status peers      −0.027 (0.031)    −0.042 (0.026)*   −0.033 (0.020)*   −0.019 (0.021)    −0.004 (0.023)
Reading (2)
  Socio-economic status peers    1.383 (1.807)     2.502 (1.702)     2.602 (1.352)*    0.945 (1.160)     0.678 (1.193)
  Own status × status peers      −0.012 (0.029)    −0.035 (0.027)    −0.039 (0.023)*   −0.017 (0.020)    −0.029 (0.022)
Mathematics (1)
  Socio-economic status peers    1.448 (0.727)**   1.290 (0.758)*    1.582 (0.586)***  1.461 (0.628)**   0.806 (0.815)
  Own status × status peers      −0.008 (0.014)    −0.000 (0.013)    0.003 (0.010)     0.013 (0.012)     0.022 (0.015)
Mathematics (2)
  Socio-economic status peers    −0.452 (1.122)    −0.913 (1.026)    −0.638 (0.828)    −1.465 (1.023)    −1.255 (1.123)
  Own status × status peers      0.001 (0.016)     0.010 (0.015)     0.011 (0.012)     0.026 (0.016)     0.024 (0.018)

Quantile regressions, bootstrapped standard errors in parentheses; dummies for missing values of some explanatory variables included; individual characteristics and family background included; school types and school characteristics included in (1), school fixed effects in (2). ***, ** and * indicate statistical significance at 1, 5 and 10%
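A minimal sketch of estimating peer effects at the conditional quantiles reported in Table 5. The variable names and the synthetic data are assumptions; the paper additionally includes the interaction with own status, the full set of controls, and bootstrapped standard errors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "peer_ses": rng.normal(50, 6.3, n),   # mean socio-economic status in the peer group
    "ses": rng.normal(50, 15, n),         # own socio-economic status
})
df["score"] = 450 + 2.0 * df["peer_ses"] + 0.7 * df["ses"] + rng.normal(0, 60, n)

X = sm.add_constant(df[["peer_ses", "ses"]])
for q in (0.15, 0.25, 0.50, 0.75, 0.85):
    fit = sm.QuantReg(df["score"], X).fit(q=q)
    print(f"quantile {q:.2f}: peer effect {fit.params['peer_ses']:.3f}")
```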
These are somewhat contradictory results: peer effects as such are higher and more significant in reading, but more diversity in mathematics education might lead to some negative effects. A possible solution, which has been exercised in some Austrian schools, is to integrate schools and classrooms but form working groups at different levels in some subjects, like mathematics.

5 Conclusion

In this paper we investigate peer effects in Austrian schools using data from PISA 2000 and 2003. Estimating peer effects is difficult mainly because of the self-selection of students into schools and peer groups. As the Austrian school system sorts secondary education students into many different school types, we use two specifications: school type fixed effects and school fixed effects. We found a considerable positive peer effect in reading achievement, which is diminishing in the students' own socio-economic background. Thus, students from less beneficial family backgrounds can achieve higher returns from a favorable peer group than others.
Table 6 Asymmetric peer effects with respect to ability and heterogeneity

Quantile                         0.15              0.25              0.50              0.75              0.85
Reading (1)
  Socio-economic status peers    1.628 (0.677)**   1.458 (0.560)***  1.330 (0.485)***  0.901 (0.540)*    0.459 (0.599)
  Status heterogeneity           −1.123 (1.051)    −0.724 (0.838)    −0.455 (0.741)    0.940 (0.840)     1.364 (0.946)
Reading (2)
  Socio-economic status peers    0.826 (0.917)     0.632 (0.846)     0.583 (0.687)     0.081 (0.811)     −0.565 (0.766)
  Status heterogeneity           −0.046 (1.438)    −2.022 (1.292)    0.250 (1.041)     −0.123 (1.013)    0.231 (1.260)
Mathematics (1)
  Socio-economic status peers    1.226 (0.340)***  1.292 (0.322)***  1.735 (0.265)***  1.953 (0.304)***  1.717 (0.352)***
  Status heterogeneity           −0.730 (0.726)    −0.022 (0.735)    −0.066 (0.585)    1.113 (0.662)*    1.685 (0.683)**
Mathematics (2)
  Socio-economic status peers    −0.146 (0.604)    −0.527 (0.563)    0.051 (0.474)     −0.310 (0.627)    0.098 (0.605)
  Status heterogeneity           −1.714 (0.972)*   −0.950 (0.882)    −0.718 (0.873)    0.728 (0.898)     1.705 (0.920)*

Quantile regressions, bootstrapped standard errors in parentheses; dummies for missing values of some explanatory variables included; individual characteristics and family background included; school types and school characteristics included in (1), school fixed effects in (2). ***, ** and * indicate statistical significance at 1, 5 and 10%
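The peer-quality and heterogeneity regressors used above can be built with a simple group-by. The column names are hypothetical, and whether the original measures exclude the student's own value is not stated in the text, so the sketch simply uses group-level moments:

```python
import pandas as pd

# One row per student; "school" identifies the peer group (students in the same
# grade of the same school), "ses" is the socio-economic status index.
df = pd.DataFrame({
    "school": [1, 1, 1, 2, 2, 2],
    "ses":    [42.0, 55.0, 61.0, 38.0, 40.0, 70.0],
})
grp = df.groupby("school")["ses"]
df["peer_quality"] = grp.transform("mean")          # mean peer status
df["status_heterogeneity"] = grp.transform("std")   # within-group SD of peer status
print(df)
```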
This phenomenon cannot be observed in mathematics: here, selection of students among different schools seems to play a more important role. Moreover, peer effects in reading seem to be asymmetric in favor of low ability students, meaning that the returns to peers are higher for these students. There is some reason to believe that we are measuring a lower bound for peer effects: (i) we observe students only in grades, not in classes; the peer concept used (students in the same grade, but not necessarily in the same class) is therefore too broad; (ii) measurement error in the fixed effects estimations would lead to an attenuation bias, pushing the peer effects toward zero.
Peer effects are of political interest because they serve as an argument for reallocating students into different schools or environments: weak students would profit if they were in the same class with high-performing students. In order to be efficiency-enhancing, in the sense of increasing cognitive achievement of students, two conditions have to be met. First, peer effects should be higher for low-skilled students as compared to high-skilled ones, and second, higher heterogeneity in schools should have no detrimental effect on average learning in the group. Our results do not give a clear message in favor of reallocating students. There is some evidence for peer effects in reading,
effects which are, in fact, higher for low-performing students and those from less well-off socio-economic parental backgrounds. On the other hand, more heterogeneity of the student body seems to have some costs for low achieving students in mathematics. Perhaps a solution is to integrate students more, but allow smaller working groups at different levels for specific subjects, like mathematics and science. Moreover, the Austrian school system is highly stratified in school types and schools. Cognitive outcomes, as measured by the PISA scores, differ enormously between school types and schools. School types that aim primarily at preparing students for a college education show considerably higher average PISA scores. Whereas the public discussion in Austria centers on the question whether the different school types should be abolished and all children between 10 and 14 should be taught together in one type of school, our analysis of peer group effects relies on (limited) variation within school types and schools of upper secondary education. Assessing the abolition of early stratification in Austrian schools therefore requires an extrapolation of our results.

Acknowledgements Thanks to René Böheim, Daniele Checchi, Christian Dustmann, Steve Machin, Pedro Martins, seminar participants in Innsbruck, Linz and Mannheim as well as two referees for helpful comments. Rudolf Winter-Ebmer is also associated with IZA, Bonn and CEPR, London and acknowledges support from the Austrian FWF.
References

Ammermüller A, Pischke JS (2006) Peer effects in European primary schools: evidence from PIRLS. Institute for the Study of Labor (IZA), Bonn, Discussion paper no. 2077
Angrist JD, Lang K (2004) Does school integration generate peer effects? Evidence from Boston's Metco program. Am Econ Rev 94(5):1613–1634
Arcidiacono P, Nicholson S (2005) Peer effects in medical school. J Public Econ 89(2–3):327–350
Bandura A (1986) Observational learning, chapter 2. In: Social foundations of thought and action. A social cognitive theory. Prentice-Hall, USA
Bayer P, Pintoff R, Pozen DE (2004) Building criminal capital behind bars: peer effects in juvenile corrections. Yale School of Management, Working Paper, July 2004
Bénabou R (1993) Workings of a city: location, education and production. Q J Econ 108(3):619–652
Betts JR (1998) The impact of educational standards on the level and distribution of earnings. Am Econ Rev 88(1):266–275
Betts JR, Zau A (2004) Peer groups and academic achievement: panel evidence from administrative data. Public Policy Institute of California, February 2004
Brunello G, Giannini M, Ariga K (2006) The optimal timing of school tracking. In: Peterson P, Wößmann L (eds) Schools and the equal opportunity problem. MIT Press, Cambridge USA (in press)
Eisenberg D (2004) Peer effects for adolescent substance use: do they really exist? Berkeley School of Public Health, Working Paper, March 2004
Falk A, Ichino A (2006) Clean evidence on peer effects. J Labor Econ 24(1):39–57
Fertig M (2003) Educational production, endogenous peer group formation and class composition. Evidence from the PISA 2000 study. Institute for the Study of Labor (IZA), Bonn, Discussion paper no. 714
Gächter S, Thöni C (2005) Social learning and voluntary cooperation among like-minded people. J Eur Econ Assoc 3(2–3):303–314
Ganzeboom HBG, DeGraaf PM, Treiman DJ (1992) A standard international index of occupational status. Soc Sci Res 21:1–56
Gibbons S, Telhaj S (2006) Peer effects and pupil attainment: evidence from secondary school transition. London School of Economics, CEP Working paper
Hanushek EA (1997) Assessing the effects of school resources on student performance: an update. Educ Eval Policy Anal 19(2):141–164
Hanushek EA (1999) The evidence on class size. In: Mayer SE, Peterson P (eds) Earning and learning: How schools matter. Brookings Institution, Washington, pp 131–168
Hanushek EA (2002) Evidence, politics, and the class size debate. In: Lawrence M, Rothstein R (eds) The class size debate. Economic Policy Institute, Washington, pp 37–65
Hanushek EA, Wössmann L (2006) Does educational tracking affect performance and inequality? Differences-in-differences evidence across countries. Econ J 116(510):C63–C76
Hanushek EA, Kain JF, Markman JM, Rivkin SG (2003) Does peer ability affect student achievement? J Appl Econ 18(5):527–544
Haider G et al. (2001) PISA 2000. Nationaler Bericht. Österreichischer Studien-Verlag, Innsbruck. (http://www.pisa-austria.at/pisa2000/index.htm)
Hoxby CM (2000) Peer effects in the classroom: learning from gender and race variation. National Bureau of Economic Research, Working paper no. 7867
Koenker R, Bassett G Jr. (1978) Regression quantiles. Econometrica 46(1):33–50
Kooreman P (2006) Time, money, peers, and parents: some data and theories on teenage behavior. J Popul Econ (in press)
Krauth B (2005) Peer effects and selection effects on youth smoking in Canada. Can J Econ 38(3)
Krueger AB (2003) Economic considerations and class size. Econ J 113(485):F34–F63
Krueger D, Kumar KB (2004) Skill-specific rather than general education: a reason for US-Europe growth differences? J Econ Growth 9(2):167–207
Krueger AB, Whitmore DM (2001) The effect of attending a small class in the early grades on college-test taking and middle school test results: evidence from project STAR. Econ J 111(468)
Lazear EP (2001) Educational production. Q J Econ 116(3):777–803
Levin J (2001) For whom the reductions count? A quantile regression analysis of class size and peer effects on scholastic achievement. Empir Econ 26:221–246
Manski CF (1995) Identification problems in the social sciences. Harvard University Press, Massachusetts
Manski CF (2000) Economic analysis of social interactions. J Econ Perspect 14(3):115–136
McEwan PJ (2003) Peer effects on student achievement: evidence from Chile. Econ Educ Rev 22:131–141
Organisation for Economic Co-operation and Development (2001) Knowledge and skills for life. First results from PISA 2000, Paris. http://www.pisa.oecd.org
Organisation for Economic Co-operation and Development (2002) PISA 2000 technical report, Paris. http://www.pisa.oecd.org/tech/intro.htm
Organisation for Economic Co-operation and Development (2004) Learning for tomorrow's world: first results from PISA 2003, Paris. http://www.pisa.oecd.org
Organisation for Economic Co-operation and Development (2005) PISA 2003 technical report, Paris. http://www.pisa.oecd.org
Robertson D, Symons J (2003) Do peer groups matter? Peer group versus schooling effects on academic achievement. Economica 70:31–53
Sacerdote B (2001) Peer effects with random assignment: results for Dartmouth roommates. Q J Econ 116(2):681–704
Schindler-Rangvid B (2003) Educational peer effects. Quantile regression evidence from Denmark with PISA 2000 data, chapter 3. Do schools matter? Ph.D. thesis, Aarhus School of Business, Denmark
Schneeweis N (2006) On the integration of immigrant children in education. University of Linz, Working Paper
Soetevent A, Kooreman P (2006) A discrete choice model with social interactions; with an application to High School teen behavior. J Appl Econ (in press)
Vigdor J, Nechyba T (2004) Peer effects in North Carolina public schools. Duke University, Durham USA, Working Paper, July 2004
Winston GC, Zimmerman DJ (2006) Peer effects in higher education. In: Hoxby C (ed) College decisions: How students actually make them and how they could. University of Chicago Press for the NBER (in press)
Wößmann L (2003a) Schooling resources, educational institutions and student performance: the international evidence. Oxf Bull Econ Stat 65(2):117–170
Fair ranking of teachers

Hendrik Jürges · Kerstin Schneider
Accepted: 25 September 2006 / Published online: 24 October 2006 © Springer-Verlag 2006
Abstract Economic theory suggests that it is optimal to reward teachers depending on the relative performance of their students. We develop an econometric approach, based on stochastic frontier analysis, to construct a fair ranking that accounts for the socio-economic background of students and schools and for the imprecision inherent in achievement data. Using German PIRLS (IGLU) data, we exploit the hierarchical structure of the data to estimate the efficiency of each teacher. A parsimonious set of control variables suffices to get a "fair" estimate of unobserved teacher quality. A Hausman–Taylor type estimator is the preferred estimator because teacher efficiency and some exogenous variables may be correlated.

Keywords Teacher quality · Fair ranking · Accountability · Stochastic frontier · Hausman–Taylor estimator

JEL Classification I21 · I28
1 Introduction

In this paper, we illustrate and discuss an econometric approach to compute "fair" rankings of teachers or schools using the outcome of large-scale assessment studies.
H. Jürges (B)
MEA, University of Mannheim, L13,17, 68131 Mannheim, Germany
e-mail: [email protected]

K. Schneider
Department of Economics, University of Wuppertal, Gaußstr. 20, 42097 Wuppertal, Germany
e-mail: [email protected]
By "fair" we mean that important determinants of student achievement that are beyond the control of the individual teacher are controlled for, and that the imprecision inherent in measures of student achievement is accounted for. We rank teachers according to their efficiency or quality, where the efficiency estimate is essentially the residual from a regression of observable outputs on observable inputs. By adopting the definition of technical efficiency from empirical production analysis, we draw on the analogy between the production of goods and the production of skills.
Why are we interested in ranking teachers by their efficiency, and why should fairness be a specific concern? First, we think of ranking teachers by quality measures as an incentive device to improve the quality of the education system. In addition to the social background of each student, teacher quality is arguably one of the most important inputs in the education production function. One, at least to economists, obvious incentive device to increase teachers' effort and to improve the quality of education is performance-related pay, where academic performance levels of students or changes in academic performance are used to measure teacher quality. See e.g. the discussion of accountability systems in the USA (Hanushek and Raymond 2004; Jacob 2005), performance management systems in the UK (Propper and Wilson 2003), or the yardstick competition model developed in Jürges et al. (2005b).
While there are convincing theoretical arguments in favor of performance-related pay for teachers, the empirical evidence on the relationship between teacher salaries and teacher quality is mixed. For instance, Hanushek et al. (1999) show that teachers' mobility between schools is hardly affected by incentive (salary) schemes but strongly driven by non-monetary aspects like the composition of the student body. Moreover, they find no consistent relationship of school district mathematics and reading test scores with district teacher salaries. However, as shown in Ballou (2001), the failure of performance-related pay in public schools may not be inherent in teaching as such, but due to specific factors such as the opposition of teacher unions. Apart from tying teachers' pay to the quality of teaching, higher quality could also be enforced by stricter certification and licensing provisions. As Angrist and Guryan (2004) show, this strategy can fail: the introduction of state-mandated teacher testing in the US has increased teacher wages with no corresponding increase in quality.
Evidence in favor of positive effects of performance-related pay schemes is found by Lavy (2002, 2004), who evaluates the effects of incentive pay for Israeli high-school teachers in grades 10–12 using data from rank-order tournaments. The tournaments were conducted in five different subjects: English, Hebrew and Arabic, math and other. In each tournament teachers could win up to $7,500, which is a significant sum compared to the mean gross annual income of $30,000. Lavy shows that performance incentives had a significant positive effect on student achievement, both in terms of higher test scores and lower high-school drop-out rates (especially among students from disadvantaged backgrounds). This can mainly be attributed to changes in teaching methods, after-school teaching, and increased responsiveness to student needs. Notably, Lavy finds no evidence for the manipulation of test scores. A comparison of the teacher incentive program to a conventional education program that increased school resources
shows that the latter had a larger effect on three achievement measures (average test scores, number of credit units and the proportion of pupils sitting for matriculation exams) than the incentives program, but a smaller effect on three other outcome measures (number of science credit units, proportion of students who earned matriculation certificates, and the drop-out rate). However, the costs per school of the resources program were twice those of the incentives program. Overall, the teacher incentive program proved to be more cost-effective than giving additional resources to schools in order to increase teaching time, split classes into smaller study groups, and provide extra coaching for weak students.
Rankings and performance-related pay schemes meet strong opposition from teachers' unions wherever they are discussed or implemented. They introduce elements of competition into the school system that were previously unknown. Although there may be good arguments not to introduce too many external incentive devices into the system, for example because extrinsic motivation can crowd out intrinsic motivation, the argument brought forward most often by teachers' unions is an alleged lack of fairness. For instance, before the introduction of a payment scheme in the UK, a teachers' union leader argued that "all the evidence points to the fact that background factors over which teachers have no control undermine the validity of pupil progress being used for pay assessments" (Payne 2000). A similar argument has been put forth in New Zealand, where performance-related pay was introduced a few years ago: teachers feared that "standards assume all schools to be the same but that teaching in a decile one school where students are socially and economically deprived is immeasurably more difficult than teaching in a decile ten school where students are advantaged in a number of ways" (Sullivan 1999).
Recently, a group of seven German federal states has introduced regular standardized tests of student skills at different grades in primary and secondary schools (VERA). German teachers' unions object to the publication of school-level results or even rankings constructed from these results because, as they argue, different schools operate under very different conditions. In reaction, politicians have promised not to use the results to rank schools or teachers. To the extent that this is only motivated by fear of the alleged unfairness of rankings based on student achievement, this policy appears to be overcautious and misguided.
The problem of fair rankings is of course not new. One problem, the imprecision of school accountability measures, is addressed in Kane and Staiger (2002a,b). The reliability of test scores as a measure of school quality might be affected by transitory effects such as "a dog barking in the parking lot, inclement weather on the day of the test" (Kane and Staiger 2002a, p. 95). Moreover, as Kane and Staiger suggest, the imprecision of average test scores may result from sampling variation in small samples stemming from the idiosyncrasies of the sample of students tested. As a consequence, large schools are less likely to experience large changes in performance than small schools, which reduces the reliability of value-added measures of performance. One solution, not discussed in Kane and Staiger, is to estimate teacher efficiency as random effects rather than as simple averages (see Sect. 2).
The obvious need to account for the socio-demographic composition of the student body is at least rudimentarily reflected in most school accountability systems. For example, when the public school accountability system was introduced in the US, several states constructed "similar schools indices" to ensure that each school is compared only with comparable schools (see e.g. http://www.cde.ca.gov/ta/ for information on the Similar Schools Index of California). Similar schools are usually those with a similar, broadly defined, socio-demographic background, for example ethnic composition or the percentage of students participating in free lunch programs. One problem with the existing school indices is that their construction is rather ad hoc and driven by data availability.
In this paper we propose a regression-based method to construct "fair" rankings of teachers and schools. We also suggest a parsimonious set of background variables with high explanatory power for individual student achievement that can be collected at low cost in the course of the standardized test. Moreover, we suggest putting more emphasis on measuring the performance of individual teachers rather than the average performance of schools. This is supported by the evidence presented in Lavy (2002, 2004), which suggests that incentives work more effectively if they target teachers directly.
We illustrate our method using German primary school data that are originally part of a larger international study (the Progress in International Reading Literacy Study, PIRLS). The German primary school data are well suited for our purpose for several reasons. First, because of differences in education systems, measuring relative efficiency across countries is a very difficult task. We thus confine ourselves to a single-country analysis. Second, in education research, it is essential to have a good understanding of the institutional details of the education system under study. One salient feature of German primary schools is that students typically stay with the same teacher for at least 2 years, in most cases even for 4 years, i.e., for the entire primary school period. Moreover, the class teacher teaches most or all of the subjects in primary school. This is not true for secondary schools, where teachers are specialized. The majority of the students in our data have thus been taught reading and most of the other subjects by one teacher only, and we can attribute the learning progress to a single teacher. A last noteworthy characteristic of the German primary school system is that school choice is very limited (although this is currently changing in some federal states). Children are generally allocated to public schools according to the school district, and private schools are rare. Hence, sorting occurs mainly on observable socio-economic background characteristics but not on innate (and unobservable) ability.
Compared to other countries that already use school accountability systems, German education policy is still overly cautious about measuring performance and making the results public. This is different in the USA. While the opponents of school accountability may have valid arguments, the proponents appear to outweigh them. For instance, 94% of the US public favor testing and standards (Hoxby 2002), and the effects on the average performance of students appear to be significantly positive (e.g. Hanushek and Raymond 2004). The cost effectiveness of accountability is unmatched by any other reform of the
schooling system. Even the most expensive accountability programs in the US cost less than a quarter of one percent of per-pupil spending (Hoxby 2002). A modest 10% reduction in class size costs 124 times more than the average accountability system in the USA. A 10% increase in teacher compensation costs 88 times more than the average cost of assessment.
In the past, German students have not been tested regularly with standardized tests, and while some federal states had at least central exit examinations (cf. Jürges et al. 2005a), the results of the tests were generally not published on the school level. Thus parents and students could not use test results as a signal of school quality. However, this may change in the future because performance-related pay will be introduced for German civil servants in general. Most teachers in Germany are still civil servants. School or teacher accountability systems could serve as a basis to reward teachers depending on their performance. The method to construct "fair" rankings proposed in the present paper might thus be worth discussing when performance-related pay for teachers is finally due.
The remainder of the paper is organized as follows. In Sect. 2, we describe and discuss different regression-based (stochastic frontier) methods to estimate the effectiveness of individual teachers. Section 3 gives a description of the German PIRLS data, which we use to demonstrate these methods. In Sect. 4, we present the results, and Sect. 5 summarizes and concludes.

2 Measuring teacher efficiency in a stochastic frontier model

The method used in this paper to estimate teacher efficiency is based on the following generic panel data model:

y_is = z_i γ + x_is β + u_i + v_is     (1)
In (1), y_is is the test score of student s in teacher i's class (the output). The inputs x_is are the students' background variables varying within a class, whereas the input variables in z_i are constant within a class or for the teacher. Variables in x_is are, for instance, the parents' educational background. Examples of the variables in z_i are the class size or the percentage of students with immigrant backgrounds. The error term is composed of v_is, the usual i.i.d. error term, and a teacher-specific error term u_i that captures the efficiency or quality of teacher i in education production. In the following we use quality and efficiency of teachers as synonyms. The stochastic frontier literature that has developed methods to estimate u_i is deeply rooted in productivity analysis, which attempts to estimate the "efficiency" of a firm. In this context, efficiency means the unobserved heterogeneity due to the managerial quality of the firm. Put differently: given identical observable inputs, the relative efficiency of the firm explains why the level of output still differs across firms.
Equation (1) describes a typical stochastic frontier model using panel data (see Kumbhakar and Lovell 2000 for a general introduction to stochastic frontier analysis). Unlike in a usual panel data model, we do not have observations across individuals and over time. Rather, students are nested in classes (and/or have common teachers), and we have hierarchical data in which class size corresponds to the time dimension and classes (or teachers) are the observation units. Our data are thus amenable to classical panel data econometrics. In the education literature, panel models are routinely used, but they are known as multilevel models or hierarchical linear models. The difference between our data and the panel data used in most of the economics literature is that observations have no chronological order. Thus we do not have to account for unobserved time-varying influences that induce greater dependence between observations that are closer together in time.
In our estimation, the unobserved quality u_i is the central variable. This is clearly in contrast to usual panel data models, where the individual effect is not of interest in itself, but where appropriate treatment of the individual effects is necessary for a consistent estimation of the slope coefficients β and γ. Given panel data, there are several models to estimate u_i. The simplest is the fixed effects or dummy variable model. The û_i are parameters of dummy variables for each teacher that are estimated along with the β̂s. Alternatively, the û_i can be recovered as the teacher average of the combined residual ε̂_is = û_i + ν̂_is. The advantage of the fixed effects estimator is that the slope parameters β̂ can be consistently estimated even if the inputs are correlated with the teacher effects u_i. The only assumption needed for the fixed effects slope parameter estimates to be consistent is that the inputs are not correlated with the random error v_is.
Fixed effects estimation has two drawbacks. First, although the estimated efficiency parameters are unbiased, they are only consistent for very large numbers of students per teacher. Unbiasedness might not impress teachers whose students had an unlucky day and who are told that, were the students in their class tested again and again, fixed effects would find the true teaching quality on average. Second, the effects of all teacher- or class-invariant variables are still included in the estimated fixed effects. One way to account for the effect of class-invariant variables while retaining the advantages of the fixed effects estimator is a two-step procedure sometimes found in the applied econometrics literature (e.g. Black and Lynch 2001): the first step is to estimate the fixed effects model as described above; the second step is to regress the individual effect obtained in the first step on the class-invariant variables. If individual-specific means x̄_i of all teacher-varying variables are included as well in the second step, this two-step model is equivalent to a pooled OLS regression of y_is on x_is, x̄_i, and z_i. However, we know that this cannot be the best way to proceed because pooled OLS ignores the intra-teacher correlation of its error term. This suggests regressing y_is on x_is, x̄_i and z_i using a random effects (or variance component) model. In a random effects model, the individual-specific term u_i is not freely estimated but is assumed to be normally distributed with mean zero and variance σ_u², and to be uncorrelated with the explanatory variables.
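As an illustration of the estimators discussed above, a small simulation sketch: β is estimated from within-teacher variation (the fixed effects route) and each teacher effect is then recovered as the teacher average of the combined residual. The toy data and the single regressor are assumptions; this is not the PIRLS estimation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_teachers, n_students = 40, 20
teacher = np.repeat(np.arange(n_teachers), n_students)
u = rng.normal(0, 3, n_teachers)                 # true teacher efficiency u_i
x = rng.normal(0, 1, teacher.size)               # a single student-level input
y = 150 + 2.0 * x + u[teacher] + rng.normal(0, 8, teacher.size)

def teacher_mean(v):
    return np.bincount(teacher, weights=v) / np.bincount(teacher)

# Within (fixed effects) estimate of beta: demean x and y by teacher.
xd = x - teacher_mean(x)[teacher]
yd = y - teacher_mean(y)[teacher]
beta_fe = (xd @ yd) / (xd @ xd)

# Teacher effect = teacher average of the combined residual, centered on zero.
u_hat = teacher_mean(y - beta_fe * x)
u_hat -= u_hat.mean()
print(round(beta_fe, 2), round(np.corrcoef(u, u_hat)[0, 1], 2))
```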
It is a common misperception in the economics literature that a significant Hausman test suggests abandoning the random effects model in favor of a fixed effects model (Skrondal and Rabe-Hesketh 2004). If, as we suggest, the individual-specific means x̄_i of all explanatory variables that vary within classes or teachers are included in the regression, their regression parameters can be consistently estimated along with the coefficients for the class-invariant variables. Note that, conditional on x̄_i, the x_is are orthogonal to the individual-specific effect u_i by construction. Of course, correlations between ν_is and x_is or z_i, or between u_i and z_i, remain potential problems that need to be addressed by (albeit different) instrumental variables strategies.
In the context of our application, in which we are interested in the residual rather than the slope parameters, random effects estimation has another advantage compared to fixed effects estimation. As mentioned before, in the fixed effects model the individual-specific effect can be computed (without bias) as the within-teacher average of the combined error term. However, when the number of observations per teacher S_i is small, this within-teacher average can be very sensitive to sampling variation. The random effects estimator accounts for the number of observations per teacher because it "shrinks" the within-teacher average of the combined residual in order to obtain the individual-specific effect:
û_i^RE = (1 − σ̂_ν² / (σ̂_ν² + S_i σ̂_u²)) · (1/S_i) Σ_{s=1}^{S_i} ε̂_is^RE,     (2)
where the expression in parentheses is the shrinkage factor. For given σ̂_u² and σ̂_ν², the shrinkage factor increases in S_i and approaches unity as S_i approaches infinity. The random effects estimator is thus conservative in the sense that when there is little information on a teacher (i.e. few students), the estimate is close to zero, i.e. the average across all teachers. Since random effects estimation assumes that teacher quality is "drawn" from some probability distribution, teachers with only few students have estimates closer to the mean of this distribution. When we have no information at all on the students of a teacher, our best estimate is the overall mean (see Goldstein 1997).
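To see what the shrinkage factor in Eq. (2) does, a small numerical sketch with assumed variance components (σ̂_u² = 4, σ̂_ν² = 64): the same within-teacher mean residual is pulled toward zero much more strongly for teachers with few tested students.

```python
import numpy as np

sigma2_u, sigma2_v = 4.0, 64.0            # assumed variance components
S = np.array([5, 10, 20, 40])             # students per teacher, S_i
avg_resid = 2.0                           # identical within-teacher mean residual

shrinkage = 1.0 - sigma2_v / (sigma2_v + S * sigma2_u)
u_hat_re = shrinkage * avg_resid          # Eq. (2): shrinkage times the mean residual
for s, f, u in zip(S, shrinkage, u_hat_re):
    print(f"S_i = {s:2d}: shrinkage = {f:.2f}, shrunken effect = {u:.2f}")
```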
One possible source of endogeneity in the context of our study, which is commonly overlooked in the school effectiveness literature, is the sorting of teachers. Better teachers may, for instance, be able to choose schools or classes within schools that are less problematic in terms of the students' learning abilities or socio-economic background in general. It is also conceivable that better teachers are sent into more problematic classes. In any case, such sorting can potentially bias the estimates of γ because u_i and z_i are correlated. A random effects instrumental variables estimator that accounts for this type of endogeneity was developed by Hausman and Taylor (1981).
The Hausman–Taylor (HT) estimator works under the assumption that some of the background variables are in fact exogenous. Specifically, Hausman and Taylor partition z_i and x_is as z_i = (z_i1, z_i2) and x_is = (x_is1, x_is2). The variables in z_i1 and x_is1 are assumed to be uncorrelated with u_i, but the variables in z_i2 and x_is2 are allowed to be correlated with u_i. The HT estimator is obtained in a three-step procedure. In the first step, consistent estimates for β are computed using a fixed effects regression as described above. The second step is an instrumental variables regression of the individual-specific error term obtained in step one on the class-invariant explanatory variables z_i = (z_i1, z_i2). Since the z_i2 are assumed to be endogenous, they are instrumented by the exogenous class-varying variables x_is1. The order condition for identification is that the number of class-varying variables in x_is1 is at least as large as the number of class-invariant endogenous variables in z_i2. Moreover, the correlation between the instruments and z_i2 needs to be sufficiently large to avoid a weak instrument problem. The second step yields consistent but inefficient estimates for γ. The first- and second-step estimators are then combined to compute a residual term with variance components σ̂_ν² and σ̂_u², which in turn are used to quasi-demean all variables (as in a regular random effects model). In the third step, the quasi-demeaned dependent variable is regressed on the quasi-demeaned right-hand-side variables, using as instruments z_i1, x̄_i1, and the deviations from the individual-specific means x_is1 − x̄_i1 and x_is2 − x̄_i2.
In this paper, we concentrate on regression-based approaches to efficiency estimation. A conceptually different but widely used alternative method is data envelopment analysis (DEA). DEA is a nonstochastic linear programming approach that can handle multiple outputs and that is nonparametric with respect to the functional form of the underlying production technology and the distribution of the efficiency term. The literature shows that both methods have their merits and drawbacks. Gong and Sickles (1992) and Sickles (2005) examine the relative performance of data envelopment and regression-based (stochastic frontier) analyses with panel data. The fairly robust result is that stochastic frontier analyses perform better than DEA when there is large measurement error, provided that the production technology is specified correctly. While the drawback of regression-based approaches with respect to the functional form of the production technology is indisputable, we regard the measurement error problem as more serious when dealing with student test score data. Moreover, DEA is not well suited to accounting for a large number of explanatory variables. In fact, when we apply DEA to our problem, using the same explanatory variables as for estimating a stochastic efficiency term, we find that nearly all teachers are at the efficient margin, i.e., a meaningful ranking is hardly possible.
3 The German PIRLS data

Next to TIMSS and PISA, PIRLS is the third large-scale student assessment of German students. In 2001, 35 countries participated in PIRLS (in Germany better known as IGLU), which tested the reading literacy of students in 4th grade (9–10 years old). PIRLS collected a large amount of information about the socio-economic background of the students, both from the students and
their parents ("home questionnaire") and additional information about the school from teacher and principal questionnaires (the data are freely available at http://www.timss.bc.edu/). In the following we attempt to identify a parsimonious set of variables from the questionnaires that exhaustively reflects the socio-economic background of the students. Table 1 describes the variables used in the analysis, using data from the student, principal, and teacher questionnaires. We include only students with valid information on all variables. The total number of students in our working sample is 4,964 and the number of teachers is 279. The dependent variable used in the analysis is the PIRLS reading achievement Rasch score, which is standardized to mean 150 and standard deviation 10.
The first part of Table 1 summarizes the information from the student questionnaire. Fifty percent of the students are boys. Sixteen percent of the students are older than the typical 4th grader and 4% are younger. Students can be older when they had to repeat a class or were not mature enough for school at the age of six. Similarly, if a child is intellectually gifted and mature, it can enter school before the age of six (for an econometric analysis of the choice and effects of age at school entry in Germany see e.g. Fertig and Kluve 2005; Puhani and Weber 2005; Jürges and Schneider 2006).
Table 1 Data description

Variable                         Source^a   Mean    Within SD   Between SD   Between min   Between max
Reading score                               150.3   8.04        3.93         134.4         159.1
Boy                              S          0.50    0.49        0.13         0.14          1.00
Late entry/repeated class        S          0.16    0.35        0.13         0.00          1.00
Early entry/skipped class        S          0.04    0.19        0.06         0.00          0.33
0–10 books                       S          0.09    0.27        0.10         0.00          0.50
11–25 books                      S          0.25    0.41        0.13         0.00          0.71
26–100 books                     S          0.35    0.46        0.13         0.00          0.80
101–200 books                    S          0.16    0.35        0.10         0.00          0.44
>200 books                       S          0.15    0.33        0.11         0.00          0.53
Daily newspaper                  S          0.62    0.46        0.16         0.06          1.00
Own room                         S          0.80    0.38        0.15         0.20          1.00
Foreign-born                     S          0.09    0.28        0.10         0.00          0.50
Foreign-born parent              S          0.29    0.41        0.20         0.00          1.00
No German spoken at home         S          0.11    0.29        0.11         0.00          0.64
N (students)                                4,964

Urban region                     P          0.33                0.47         0.00          1.00
Suburban region                  P          0.22                0.42         0.00          1.00
School size                      P          289.6               129.7        48.0          768.0
Teacher has help in class        T          0.41                0.49         0.00          1.00
Library in class                 T          0.83                0.38         0.00          1.00
% Economically disadvantaged     P          18.5                17.14        5.00          75.0
% Economically affluent          P          15.4                17.58        5.00          75.0
Class size                       T          22.5                3.98         9.00          32.0
n (teachers)                                279

^a S, T, P: student, teacher, principal questionnaire
The next set of variables refers to the educational background of the students. The number of books at the children's home differs substantially. Nine percent of the households own not more than 10 books, whereas 15% have more than 200 books. Sixty-two percent of the students report having a daily newspaper at home. The measure of wealth used in our paper is whether the students have their own room at home, which is true for 80% of the students. We initially included in our regressions other variables such as the ownership of a lawn mower as a proxy for having a garden and the existence of a second car in the child's family. Both variables turned out to be insignificant; hence they are not included in the final specifications.
Information about the migration background of the students is also important. To distinguish between first- and second-generation foreigners, we constructed two variables: we code a student as a first-generation migrant if she was at least 1 year old before she came to Germany, which is true for 9% of the sample. In addition, we controlled for the migration background of the parents by adding a variable that is one if either parent is foreign-born or if the children were not sure. Almost 30% of the sample children have at least one parent with a migration background. To account for the status of integration, we control for the language spoken at the student's home. Eleven percent of the sample report speaking German at home only occasionally or never.
The second part of Table 1 describes the variables obtained from the teacher and principal questionnaires. These variables are constant within classes or teachers. The average class size is 22.5 students. Eighty-three percent of the classes have a library in the classroom. Forty-one percent of all teachers have at least occasional help from reading specialists or other teaching aides. The majority of the schools are located in either an urban or a suburban community, and on average there are 290 students enrolled in each school. As regards the composition of students in the school, principals were asked separately about the percentage of students from economically disadvantaged and affluent homes. The average proportion of disadvantaged and affluent households is 18.5 and 15.4%, respectively.
At first glance, it may seem odd not to use the data from the home questionnaires to measure the individual students' background. Usually, parents should be able to give more reliable answers than 10-year-old children. We still decided not to use information from the parent questionnaires. In general, unit non-response rates in PIRLS have been much higher for parents (12%) than for students (1.6%), teachers (7%) or principals (5%). Thus, we suspect that cooperation would also be low if filling out home questionnaires became a regular exercise. There is a great deal of item non-response in the home questionnaires as well, reducing the number of observations with valid information from the parents even further. Moreover, having a closer look at the patterns of parental non-response, it turns out that non-response is selective. The higher a student's reading test score, the larger the probability that the parents complete the home questionnaire, that they report their educational degree, or that they report whether they have read to their child at pre-school age. We further used the parents' data to check the reliability of the students' reports of
the number of books at home (a very important background variable). There are, as expected, differences in the answers of children and parents. Only 36% of students and parents report the same range of the number of books. But in another 40% of the cases, parents and children place themselves in neighboring categories. Parents report a larger number of books than their children in 45% of the cases and only 19% of the children report more books than their parents. However, despite these differences, we have no evidence that parents' answers are more reliable than students' answers. If both variables measure the same underlying concept, the less reliable measure (the one with the larger measurement error) should have a smaller correlation with the dependent variable. However, both have very much the same correlation with the students' test scores. The R-squared of a regression of test scores on the number of books reported by parents is 0.102 and the R-squared of a regression of test scores on the number of books reported by students is 0.100. Moreover, the difference between the parents' and the students' answers appears to be uncorrelated with student achievement; thus there is no systematic measurement error. Because we view the apparent selectivity in parents' non-response as a serious data problem, we decided not to include the information from the parent questionnaire.
One particularity of our data is that we have test scores measured at one point in time only. Thus it is not feasible to compute value-added measures of teacher quality. This could be a problem for evaluating secondary schools but is no problem for the primary school data used here. Fifty-nine percent of the schools report that teachers stay with the same class for 4 years, i.e., all throughout primary school. In 34% of the schools teachers stay with the same class for 2 years, and in 7% of the sample teachers typically stay with the class for 3 years. The decision on the duration of the student–teacher relationship is generally taken on the school level. Schools in which the time with the same teacher varies greatly or is typically only 1 year were excluded from the analysis. These are less than five percent of the schools. Interestingly, there appears to be a negative correlation between the educational and economic background of the students and the duration of the student–teacher relationship. In schools with students from more advantaged backgrounds, teachers tend not to stay for the entire 4 years with the same class. To check whether this affects our results, we excluded teachers who stayed with the class for less than the entire 4 years of primary school. The results remained qualitatively unchanged.
The selection of the included variables is of great importance, because the reliability of our estimated teacher effects depends on the inclusion of all relevant variables describing the students' and the schools' background. For selecting the included variables we tried various specifications (not reported here) with a much richer set of variables. However, since the quality of the estimation did not improve, for example by adding additional measures of wealth, we decided to use a parsimonious set of variables typically used in the literature to explain student achievement.
4 Results

Our analysis proceeds in two steps. The first step is the regression-based efficiency estimation as described in the previous section. Besides estimating the quality of each teacher, this step also aims at identifying the set of variables needed to construct a fair ranking of teachers. In the second step, we rank teachers using the estimated quality parameters and compare this "fair" ranking to a ranking based on the raw scores, i.e., we compare the unconditional ranking with a ranking that is conditional on the socio-economic background of the students.

4.1 Regression-based efficiency estimation

To estimate the education production function given in Eq. (1), we need to choose the relevant background variables. The expression z_i γ + u_i contains the effect of all measured and unmeasured teacher-invariant characteristics on student achievement. The question of which variables should be included in z_i is not trivial. It is important to use only variables that describe the teacher's environment, i.e. factors that are beyond the control of the individual teacher, but not the teacher's attitude towards teaching, formal qualification or the effort of schools and teachers (cf. Ladd and Walsh 2002). We also exclude the teachers' experience, one of the few observable quality indicators, from the regression, because the teaching quality we want to measure (and reward) should not be defined conditional on teacher experience. On the contrary, if more experienced teachers are better than less experienced teachers, we want to make sure that these teachers get higher salaries. In fact, performance-pay schemes should be based on efficiency measures that do not condition on teacher characteristics, but only on observables that cannot be influenced by the teacher, for instance the economic background of the students, the level of education of the parents or the equipment of the school. Teaching methods or cooperation among colleagues, however, are certainly important dimensions that define the ability or quality of teachers one wants to reward.
Table 2 contains the results of fixed effects, random effects, and HT regressions of student test scores on three sets of explanatory variables: student characteristics and context variables drawn from the teacher and the principal questionnaires. The fixed effects regression, shown in column 1, eliminates all variables that do not vary on the teacher level. Only student characteristics are retained, for which we briefly discuss the results:
• Girls outperform boys. Since the dependent variable has a standard deviation of 10, the estimated parameter of −0.748 translates into a gender gap in achievement of about 7% of one standard deviation. Similar gaps can be found in each country that participated in PIRLS (Mullis et al. 2003).
• Children who are older than the typical fourth grader perform worse, whereas younger children perform better.
Table 2 Regression results

Variable                    Fixed effects (1)     IV-random effects (2)                        Hausman–Taylor (3)
                                                  Within-teacher part    Between-teacher part
Boy                         −0.748** (3.31)       −0.748** (3.31)        0.973 (0.69)          −0.748** (3.41)
Late entry/repeated grade   −2.742** (8.50)       −2.742** (8.50)        −1.070 (0.59)         −2.742** (8.74)
Early entry/skipped grade   1.268* (2.24)         1.268* (2.24)          −7.515* (2.29)        1.267* (2.30)
11–25 books                 1.712** (3.85)        1.712** (3.85)         1.064 (0.40)          1.712** (3.96)
26–100 books                3.397** (7.69)        3.397** (7.69)         0.605 (0.25)          3.396** (7.91)
101–200 books               4.337** (8.71)        4.337** (8.71)         5.164 (1.85)          4.337** (8.96)
>200 books                  5.054** (9.97)        5.054** (9.97)         5.518* (2.07)         5.054** (10.26)
Daily newspaper             2.492** (10.24)       2.492** (10.24)        0.182 (0.14)          2.491** (10.53)
Own room                    1.327** (4.48)        1.327** (4.48)         2.114 (1.29)          1.327** (4.61)
Foreign-born                −1.395** (3.30)       −1.395** (3.30)        −2.868 (1.20)         −1.395** (3.39)
Foreign-born parent         −1.170** (3.89)       −1.170** (3.89)        −0.999 (0.69)         −1.170** (4.00)
No German spoken at home    −2.569** (6.18)       −2.569** (6.18)        3.062 (1.24)          −2.569** (6.36)
Urban                                                                    0.272 (0.61)          1.024 (0.18)
Suburban                                                                 0.033 (0.07)          −0.049 (0.01)
School size                                                              0.002 (1.11)          0.002 (0.12)
Teacher help in class                                                    0.205 (0.53)          0.093 (0.03)
Library, class                                                           0.151 (0.32)          0.734 (0.18)
% econ. disadvantaged                                                    −0.041** (3.24)       −0.140 (0.35)
% econ. affluent                                                         0.007 (0.62)          0.057 (0.15)
Class size                                                               −0.126 (0.94)         −0.110 (0.14)
Constant                    146.104** (292.64)    144.969** (36.35)                            148.717** (6.62)
Observations                4,964                 4,964                                        4,964
Number of teachers          279                   279                                          279

Notes: The absolute values of z statistics are in parentheses. * Significant at 5%, ** significant at 1%
This is not an age effect but due to two selection effects. First, before children are admitted to primary school, they are examined and tested to determine whether they are mature enough to enter school. Children who are not mature are admitted 1 year later (and are thus older than the majority of their classmates). Second, underachievers often repeat classes.
• The intellectual background, as described by the total number of books at home and by whether the family has a daily newspaper, proves to have large and significant effects on student achievement.
• Whether the child has an own room measures household wealth. The effect is positive and significant, but relatively small compared to e.g. having a daily newspaper at home, which has an effect that is almost twice as strong.
• As explained above, we measure the migration background of the students by three variables: whether children are born in Germany, whether parents are foreign-born, and whether they speak German at home. We find substantially and significantly lower test scores among children with a migration background.

To control for the teacher-invariant variables, we estimated a random effects model with teacher-specific means included along with the other teacher-invariant variables. We thus distinguish between a within-teacher part (which is equivalent to the fixed effects model) and a between-teacher part. The coefficients for the teacher-varying variables reported under the "between" header are the coefficients of the individual-specific means. These coefficients are jointly significant at the 1% significance level, which indicates that the fixed effects efficiency measures are correlated with the explanatory variables varying within classes.
In the random effects model, we also address the much-debated endogeneity of class size by employing an instrumental variable estimator (for a detailed discussion of class size effects in large-scale assessments see West and Wößmann 2006). The instrument used here is the theoretical class size, which we computed as follows: In the German federal states the maximum class size in primary school is regulated. Due to the federal system the maximum class size is not identical across states, but there is very limited variation between 28 and 30 students. Unfortunately, we do not know in which federal state a school is located. Hence we set the maximum class size to 29. We then calculate the theoretical class size C^T as the average class size given the number of students in 4th grade, G, and the maximum class size C_max:
G . (G − 1)/Cmax + 1
(3)
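As a small illustration, the instrument can be computed directly from grade enrolment; this is a sketch under the assumption of the integer-part (Maimonides-type) rule in Eq. (3) with Cmax = 29, and the variable names are illustrative:

```python
# Sketch: theoretical class size from Eq. (3), assuming a maximum class size
# of 29 and the integer-part rule for the number of classes.
import math

C_MAX = 29

def theoretical_class_size(grade_enrolment: int, c_max: int = C_MAX) -> float:
    """Average class size if a new class is opened only when c_max is exceeded."""
    n_classes = math.floor((grade_enrolment - 1) / c_max) + 1
    return grade_enrolment / n_classes

# The sawtooth pattern visible in Fig. 1:
for g in (20, 29, 30, 58, 59, 90):
    print(g, round(theoretical_class_size(g), 1))
```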
The number of students in 4th grade can be viewed as an exogenous variable, because children in Germany are allocated to primary schools according to school districts. Figure 1 shows the relationship between the actual class size and its instrument, the theoretical class size. The results of the random effects regression are reported in column 2 of Table 2.
Fig. 1 Actual and theoretical class size in the German PIRLS sample (actual and theoretical class size, 0–30 students, plotted against grade size, 0–174 students)
The class size has an insignificant negative effect, which is in accordance with much of the empirical literature. The only significant variable measured on the teacher level and obtained from the teacher and principal questionnaires is the proportion of economically disadvantaged students in the school. Jointly, the teacher and principal variables' effects are not statistically significant (p = 0.14). The most probable reason is that the student variables provide a sufficient description of the teacher's environment. However, as discussed in Sect. 2, the random-effects estimates do not account for the possibility of a correlation between the efficiency term ui and the teacher-invariant variables. We therefore also estimate a Hausman–Taylor-type model. The only difference from the textbook variant of this model is that we allow for "external" instruments (in our case, the theoretical class size). The potentially endogenous variables are characteristics of schools that might result in sorting of teachers. For instance, some teachers might be better suited to teach difficult children than others and choose to go to, or are sent to, schools with large proportions of such children. Thus, teacher efficiency might well be correlated with some of the input variables. Teacher-invariant variables that are treated as endogenous are the actual class size and the proportion of economically disadvantaged or advantaged students. Endogenous variables that vary within a class are whether students have a migration background and whether they speak German (the reading test language) at home. We treat the other variables as exogenous because the specific composition of the class is not observable for teachers before they join the school; only school aggregates are observable and hence affect the sorting of teachers. Moreover, we assume that the location of the school, the school size, whether there is extra help in the class, and the existence of a library are exogenous in the sense that they do not affect the sorting of teachers. In column 3 we show the results of the HT estimates.
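The instrumenting step can be illustrated with a pooled two-stage least squares regression. This is only a simplified sketch with hypothetical variable names; the paper's column 2 additionally includes random teacher effects, and column 3 applies the Hausman–Taylor treatment of the endogenous teacher-invariant regressors, neither of which is reproduced here:

```python
# Simplified sketch: instrument actual class size with the theoretical class
# size from Eq. (3). Column names are hypothetical placeholders.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("pirls_students.csv")  # hypothetical student-level file
df["const"] = 1.0

dep = df["reading_score"]
exog = df[["const", "books_at_home", "german_at_home", "pct_econ_disadvantaged"]]
endog = df[["class_size"]]
instruments = df[["theoretical_class_size"]]

res = IV2SLS(dep, exog, endog, instruments).fit(
    cov_type="clustered", clusters=df["teacher_id"])
print(res.summary)
```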
The estimated coefficients on the teacher-varying variables are nearly identical to the fixed effects regression. Consequently, a Hausman test does not reject the HT specification (χ2 = 0.0004). None of the teacher-invariant variables has a statistically significant effect on the student's reading score. To further test the Hausman–Taylor specification, we address the issue of weak instruments. There needs to be sufficient correlation between the instruments and the variables in zi2. The R2 values obtained when the instruments are included in the first-stage regressions are between 0.01 and 0.02, and the F tests of the joint significance of the instruments are between 11.6 and 70.1, which indicates that the instruments are sufficiently strong.

4.2 Construction of fair teacher rankings

The first column of Table 3 shows the rank correlations of the raw teacher scores with our efficiency estimates that control for socio-economic background. While all regression-based quality measures are positively correlated with the raw score-based ranks, the correlation is much stronger for fixed effects than for HT. As argued in the literature, the results of the efficiency estimation should not be too sensitive to the method of estimation (Sickles 2005). Table 3 shows Spearman rank correlations of 0.840 between the IV random-effects estimates and the HT estimates, and 0.849 between the fixed-effects estimates and the IV random-effects estimates. The Spearman rank correlation coefficient between fixed effects and HT is only 0.676. Thus, ignoring teacher-invariant information does affect our ranking considerably. Since the HT model approaches the endogeneity issue in the most comprehensive way and identifies the effect of the teacher-invariant variables, we view the HT estimator as our preferred estimator. In the remainder of this section we will thus concentrate on comparing rankings based on the raw scores with HT estimation-based efficiency terms.

The issue of "fairness" in ranking teachers is not only a matter of controlling for background variables or "inputs" that are beyond the control of the individual teacher. Care must also be taken to account for random variability in the allocation of students to teachers and random measurement error in achievement measures. Figure 2 shows confidence intervals for the raw teacher scores and the HT estimation-based efficiency measures. Evidently, many of these confidence intervals overlap to a large extent, suggesting that many teachers are not significantly different from each other.
Table 3 Rank correlations between raw scores and efficiency measures

                    Raw score    Fixed effects    IV-RE
Fixed effects       0.913
IV-RE               0.658        0.849
HT                  0.541        0.676            0.840
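Rank correlations of this kind are straightforward to compute; a minimal illustration with made-up placeholder values (not the paper's data):

```python
# Minimal illustration: Spearman rank correlation between two teacher rankings,
# e.g. raw class mean scores vs. HT-based efficiency terms (placeholder values).
import numpy as np
from scipy.stats import spearmanr

raw_scores = np.array([521.3, 498.7, 510.0, 465.2, 543.9, 488.1])
ht_efficiency = np.array([4.1, -2.3, 0.8, -5.6, 6.9, -1.0])

rho, pval = spearmanr(raw_scores, ht_efficiency)
print(round(rho, 3), round(pval, 3))
```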
Fig. 2 Ninety percent confidence intervals for teacher efficiency measures (two panels: Hausman–Taylor based efficiency and raw scores; teachers ordered from worst to best on the vertical axis; horizontal axis: efficiency, −20 to 20, with 90% CI)
We thus suggest constructing a ranking that distinguishes three different groups of teachers: (1) teachers that are significantly better than the average ("above average"), (2) teachers that are not significantly different from the average ("average"), and (3) teachers that are significantly worse than the average ("below average").

How the position of a teacher changes when background variables are controlled for is illustrated in Table 4. Consider first the last column. Based on the unconditional scores, we would construct a ranking with 59 out of 279 teachers (21.2%) being above average, 177 (63.3%) being average and 43 (15.4%) being below average. With efficiency measures based on HT estimates (with somewhat wider confidence intervals and a smaller variance of the efficiency measure itself), the overall number of average teachers increases to 197, i.e., there are fewer teachers that can be distinguished statistically from the average. In terms of differences between rankings, 66.1% of those who are above average unconditionally are average when HT-based efficiency measures are used and 1.7% are below average. In contrast, of those whose raw scores are below average, 46.5% move up to the average group and 14% become above average according to our efficiency measure.

Table 4 Transition matrix between efficiency groups based on raw and HT-estimation based efficiency measures (row percentages in parentheses)

                             HT-estimation based score
Raw score          Above average    Average        Below average    N
Above average      19 (32.2)        39 (66.1)      1 (1.7)          59
Average            18 (10.2)        138 (78.0)     21 (11.9)        177
Below average      6 (14.0)         20 (46.5)      17 (39.5)        43
N                  43               197            39
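A possible way to construct such a three-way grouping from estimated efficiency terms and their standard errors is sketched below on simulated placeholder data, using 90% confidence intervals as in Fig. 2; the exact inference procedure behind the published figures is not reproduced here:

```python
# Sketch (simulated placeholder data): classify teachers as above average,
# average or below average from efficiency estimates and standard errors,
# using 90% confidence intervals (z = 1.645), then cross-tabulate the raw and
# HT-based groupings as in Table 4.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 279  # number of teachers in the sample

teachers = pd.DataFrame({
    "eff_ht": rng.normal(0, 5, n), "se_ht": rng.uniform(2, 4, n),
    "eff_raw": rng.normal(0, 8, n), "se_raw": rng.uniform(2, 4, n),
})

def classify(eff, se, z=1.645):
    group = pd.Series("average", index=eff.index)
    group[eff - z * se > 0] = "above average"  # CI entirely above the mean teacher
    group[eff + z * se < 0] = "below average"  # CI entirely below the mean teacher
    return group

teachers["group_raw"] = classify(teachers["eff_raw"], teachers["se_raw"])
teachers["group_ht"] = classify(teachers["eff_ht"], teachers["se_ht"])

print(pd.crosstab(teachers["group_raw"], teachers["group_ht"], margins=True))
```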
Fig. 3 Rank changes following HT-efficiency estimation, by students' economic and intellectual background (two panels of bar charts with categories: much better, better, no change, worse, much worse; vertical axis: percent; top panel compares schools with < 10% and ≥ 10% economically disadvantaged students, bottom panel compares low and high numbers of books at home)
Teachers whose raw scores are average mostly remain in that group. Overall, 62% of the teachers do not change rank groups. Figure 3 shows the rank mobility for teachers working with students from different backgrounds. In the top panel of Fig. 3, we differentiate between teachers at schools with less than 10% of the students from economically disadvantaged backgrounds and teachers at schools with more than 10%; in the bottom panel, we differentiate by the intellectual background, i.e. the number of books at home. Both graphs are in fact very similar. As expected, teachers in schools with a poor economic or intellectual background tend to benefit from the conditional ranking, whereas teachers of students with a good economic or intellectual background lose.
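How such a mobility breakdown might be tabulated is sketched below on simulated placeholder data; the rank-change categories, cut-offs and the 10% split are illustrative assumptions and not the definitions used for Fig. 3:

```python
# Standalone sketch (simulated data): direction of rank changes after the
# HT-based adjustment, tabulated by school background in the spirit of Fig. 3.
# Category cut-offs and variable names are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 279
df = pd.DataFrame({
    "eff_raw": rng.normal(0, 8, n),
    "eff_ht": rng.normal(0, 5, n),
    "pct_disadv": rng.uniform(0, 40, n),
})

shift = df["eff_ht"].rank() - df["eff_raw"].rank()  # positive = moves up
df["mobility"] = pd.cut(shift, bins=[-n, -28, -3, 3, 28, n],
                        labels=["much worse", "worse", "no change",
                                "better", "much better"])

table = (df.groupby(df["pct_disadv"] >= 10)["mobility"]
           .value_counts(normalize=True).unstack().mul(100).round(1))
print(table)
```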
5 Conclusions and outlook
If teachers react to performance-related incentive pay by becoming better teachers, as argued in Lavy (2004), the performance of teachers has to be measured appropriately. Among economists there is little disagreement that the performance of students on standardized tests can serve as the basis for evaluating the performance of teachers. Unconditional test scores, however, are not suitable because the socio-economic background of the students has to be controlled for. In this paper, we suggest a method to estimate a teacher's efficiency, effort or unobserved quality, building on the literature on regression-based (stochastic frontier) efficiency estimation. It turns out that to estimate teacher quality we only need information from the student questionnaire (on the student's background), possibly plus some additional information on local economic indicators, like income, the unemployment rate, or the percentage and structure of migrants in the school district, and easy-to-verify indicators of school equipment. The PIRLS teacher and principal questionnaires do not add relevant information, despite the detail they contain. This is important, as principals and teachers would have no incentive to answer the questionnaires truthfully once they know that their rank and income are influenced by their answers. The relevant information from the student questionnaires that complements the aggregate indicators is easy to obtain by adding a short student questionnaire with four or five questions to the actual tests.

Based on different panel estimators that allow for the endogeneity of input variables, we construct an efficiency or quality ranking of the teachers in the sample that consists of three different groups: teachers significantly above average, teachers significantly below average, and teachers statistically indistinguishable from the average. As expected, there are clear differences between the ranking based on the unconditional test scores and the ranking based on the efficiency estimates. Fewer than two thirds of the ranked teachers remain in the same quality group after the student and school background has been controlled for. Thus, conclusions about teacher quality that are based on the raw test scores are in fact unfair in the sense that they do not account for relevant background information.

We view this paper as a first step towards developing a basis for performance-related pay schemes for teachers in Germany (which could also be used for evaluations at the school level). For the evaluation of primary school teachers our proposed method is directly applicable. However, it cannot be directly applied to teachers in secondary schools because, unlike in primary schools, a single teacher is not responsible for the learning progress of a class; teachers simply tend to change more frequently. The efficiency estimates proposed here reflect the quality of all teachers who taught the students in the past. However, if repeated observations are available, one can account for this as well by including the performance measured in t − 1 as an explanatory variable in our regression.

If German education policy accepts the need to assess the quality of teachers and schools, the natural question is how to use that information. The current policy of quality assurance in German schools is to test students and give feedback only to the schools. Data reflecting the students' background are not collected. The resulting measures of school quality only crudely control
for student background by collecting information from the principals. Moreover, they are not even passed on to the public, impeding competition between schools. In the future, German civil servants will be partly paid according to their performance. A measure of teacher quality along the lines discussed here could be used to create financial incentives for teachers to put forth effort. Using fair rankings as a basis for performance-related pay would significantly reduce the problems associated with uniform pay schemes for teachers in Germany and could help to produce steeper career profiles.

Acknowledgements This paper has benefited substantially from the constructive criticism of three anonymous referees and participants of the 2005 congress of the German Economic Association (Verein für Socialpolitik).
References

Angrist JD, Guryan J (2004) Teacher testing, teacher education, and teacher characteristics. Am Econ Rev 94:241–246
Ballou D (2001) Pay for performance in public and private schools. Econ Educ Rev 20:51–61
Black SE, Lynch LM (2001) How to compete: the impact of workplace practices and information technology on productivity. Rev Econ Stat 83:434–445
Fertig M, Kluve J (2005) The effect of age at school entry on educational attainment in Germany. RWI Discussion Papers No. 27
Goldstein H (1997) Methods in school effectiveness research. School Eff School Improv 8:369–395
Gong B, Sickles RC (1992) Finite sample evidence on the performance of stochastic frontiers and data envelopment analysis. J Econom 51:259–284
Hanushek EA, Kain JF, Rivkin SG (1999) Do higher salaries buy better teachers? NBER Working Paper 7082
Hanushek EA, Raymond ME (2004) The effect of school accountability systems on the level and distribution of student achievement. J Eur Econ Assoc 2:406–415
Hausman JA, Taylor WE (1981) Panel data and unobservable individual effects. Econometrica 49:1377–1398
Hoxby CM (2002) The cost of accountability. NBER Working Paper 8855
Jacob BA (2005) Accountability, incentives and behavior: the impact of high-stakes testing in the Chicago Public Schools. J Public Econ 89:761–796
Jürges H, Schneider K, Büchel F (2005a) The effect of central exit examinations on student achievement: quasi-experimental evidence from TIMSS Germany. J Eur Econ Assoc 3:1134–1155
Jürges H, Richter WF, Schneider K (2005b) Teacher quality and incentives. Theoretical and empirical effects of standards on teacher quality. FinanzArchiv 61:1–25
Jürges H, Schneider K (2006) Age at school entry and teacher's recommendations for secondary school track choice in Germany. Paper presented at the 20th annual ESPE conference, Verona
Kane TJ, Staiger DO (2002a) Volatility in school test scores: implications for test-based accountability systems. Brookings Papers on Education Policy 2002, pp 235–283
Kane TJ, Staiger DO (2002b) The promise and pitfalls of using imprecise school accountability measures. J Econ Perspect 16:91–114
Kumbhakar SC, Lovell CAK (2000) Stochastic frontier analysis. Cambridge University Press, Cambridge
Ladd HF, Walsh RP (2002) Implementing value-added measures of school effectiveness: getting the incentives right. Econ Educ Rev 21:1–17
Lavy V (2002) Evaluating the effect of teachers' group performance incentives on pupil achievement. J Polit Econ 110:1286–1317
Lavy V (2004) Performance pay and teachers' effort, productivity and grading ethics. NBER Working Paper 10622
Mullis IVS, Martin MO, Gonzalez EJ, Kennedy AM (2003) PIRLS 2001 International Report: IEA's Study of Reading Literacy Achievement in Primary Schools. Boston College, Chestnut Hill, MA
Payne J (2000) School teachers' review body gives green light to performance-related pay. http://www.eiro.eurofound.eu.int/2000/11/feature/uk0011100f.html
Propper C, Wilson D (2003) The use and usefulness of performance measures in the public sector. Oxford Rev Econ Policy 19:250–267
Puhani P, Weber A (2005) Does the early bird catch the worm? Instrumental variable estimates of the educational effects of age at school entry in Germany. IZA Discussion Paper 1827
Sickles RC (2005) Panel estimators and the identification of firm-specific efficiency levels in parametric, semiparametric and nonparametric settings. J Econom 126:305–334
Skrondal A, Rabe-Hesketh S (2004) Generalized latent variable modeling. Multilevel, longitudinal and structural equation models. Chapman & Hall/CRC, Boca Raton
Sullivan K (1999) Teachers standards and professionalism: contested perspectives in a decade of reform. Paper presented at the AARE-NZARE 1999. http://www.aare.edu.au/99pap/sul99090.htm
West MR, Wößmann L (2006) Class-size effects in school systems around the world: evidence from between-grade variation in TIMSS. Eur Econ Rev 50:695–736
School composition effects in Denmark: quantile regression evidence from PISA 2000

Beatrice Schindler Rangvid
Received: 15 August 2003 / Accepted: 15 December 2006 / Published online: 1 March 2007 © Springer-Verlag 2007
Abstract Data from the first wave of the OECD PISA study are combined with register data for Denmark to estimate the effect of the socioeconomic mix of schools on students' test scores. A major disadvantage of the PISA design for the analysis of school composition effects is the small number of students sampled per school. Adding family background data from administrative registers for all same-aged schoolmates of the PISA students helps overcome this. To address endogeneity in the school composition variable, the results are conditioned on a rich set of family and school variables from the PISA data. Quantile regression results suggest differential school composition effects across the conditional reading score distribution, with students in the lower quantiles achieving the largest test score gains. Mathematics results suggest that high- and low-ability students benefit equally from attending schools with a better student intake, and most results for science are only marginally significant. These results imply that mixing students of different home backgrounds could improve equity of achievement for both reading and mathematics; however, the average skill level would improve only for reading literacy. In mathematics, mixing students would not raise average outcomes, because the detrimental effect on students in the higher quantiles would offset positive effects on those in the lower quantiles.
I thank Amelie Constant, Bernd Fitzenberger, Eskil Heinesen, Peter Jensen, Craig Riddell, Michael Rosholm, Nina Smith, Robert Wright and participants at the ESPE and EALE 2003 conferences and at AKF seminars, and two anonymous referees for helpful comments and suggestions. Financial support provided by the Danish Social Science Research Council is gratefully acknowledged. B. S. Rangvid (B) AKF, Danish Institute of Governmental Research, Nyropsgade 37, 1602 Copenhagen V, Denmark e-mail:
[email protected]
Keywords Education · School composition effects · PISA · Quantile regressions

1 Introduction

In many large cities in Europe, policy makers are struggling to deal with the consequences of concentrating economically and socially disadvantaged families and minority families in the same residential areas. Ghettos have been a reality in the USA for decades, and "ghettorisation" is now an important issue in many European countries, Denmark among them (Hummelgaard et al. 1995; Hummelgaard and Husted 2001). Deprived residential areas have substantial social problems such as unemployment, dependence on welfare, and criminality. Education plays a crucial role in improving the opportunities of disadvantaged groups. However, residential concentration also leads to segregation of schools. The rapid rise in numbers of ethnic minority students in Danish cities, along with open enrolment policies which allow students to attend public schools outside their designated catchment areas, has increased socioeconomic segregation in schools considerably in the past two decades (Glavind 2004): better-off families are opting out of public schools with a high percentage of students with weak socioeconomic (SES) backgrounds. In some areas in Copenhagen City, only 10–15% of the students attend their local public school. The likely detrimental effects of socioeconomic segregation in schools are of great interest to Danish policy makers, who have initiated measures to decrease it by redefining school catchments, developing magnet schools and implementing measures to decrease residential segregation. Further, in 2001 the Danish Ministry of Education, under pressure because of the poor Danish results in the Programme for International Student Assessment (PISA) study, published for the first time ever league tables detailing the results of the national school leaving exams. Those showed, unsurprisingly, that schools in areas with a weak social composition perform worse than schools in more affluent areas. Whether this is because of these students' individual backgrounds, or because these students are clustered together in particular schools, can be answered only by attempting to disentangle selection effects from real effects by econometric modelling.

Another reason for studying school-SES effects is that Danish policy makers are considering allowing even more choice in children's education. Subsidies for private pre-school care, and freer choice among public pre-school care and public schools, are on the political agenda. Published studies indicate that allowing more school choice may lead to increased segregation, as it is mostly affluent families who exercise choice (Söderström and Uusitalo 2005). With the prospect of a more liberal child education market, it has become more important to analyse the effects of socioeconomic school segregation in Denmark.

An important aspect of the debate about less segregated schools is whether the socioeconomic mix of a school has an effect on students' test scores beyond the effect of individual students' characteristics and family backgrounds, and
the school's resources. Research has produced a variety of conclusions. Review studies by Jencks and Mayer (1990) and Thrupp et al. (2002) show that the results vary widely: sometimes effects are found, and sometimes not.1 The connection between school intake and student test scores raises many questions about the causal mechanisms of the effects of the student body composition in schools. The aim of this paper is to examine the evidence from Denmark. In particular, I address two questions:
• How does the socioeconomic composition of the school affect the students' achievements in reading, mathematics and science?
• Is the assumption of homogeneous effects that is inherent in the estimation of average effects warranted, or do effects differ for students at different points of the conditional test score distribution?
The contribution of this paper is threefold. First, to my knowledge, this is one of the few studies to use quantile regression methods to estimate the effects of school composition (or the closely related issue of peer effects). I am aware of only two related studies (Levin 2001; Schneeweis and Winter-Ebmer 2007) which estimate educational peer effects by quantile regression methods. This method has the virtue of being semi-parametric, allowing the researcher to estimate differing effects over the whole conditional test score distribution. This is particularly important when it comes to the effects of school composition: we must consider outcomes in terms of the best equilibrium, as to give one school a good student is to take that student away from another school. Thus, if low-ability students profit by attending a school with a high socioeconomic intake, the social planner must also consider how this affects high-ability students. Second, this is the only study to add data for all same-aged schoolmates of the PISA students, which corrects a major problem of the original PISA dataset when estimating the effects of school composition. Unlike the original PISA data, which sample only small numbers of students from each school, these additional data create accurate measures of the school/grade composition.2 Finally, this is the first study of the effects of school composition in Danish schools.3 I address the potential endogeneity of the school composition variable (which is the most demanding challenge in this kind of study) by conditioning the results on a comprehensive set of background controls, which is available in the international PISA 2000 data. However, as I discuss at length in Sect. 2, even though a rich set of controls is included, it is possible that these results are still
1 Recent work on educational peer group effects includes Ammermüller and Pischke (2006), Goux and Maurin (2006) and Gould et al. (2004). 2 However, a drawback is that the addition of administrative data limits this analysis to the Danish subsample of the international dataset. 3 Two OECD (2001, 2003) reports based on the dataset I employ in this study briefly address the issue of school composition. The correlation between the school composition and test scores is also briefly discussed in Andersen et al. (2001). All studies find that such correlations exist for Denmark, but that their influence is below the OECD average.
biased by selection on unobservables. Bearing this in mind, the results suggest that, at the centre of the conditional test score distribution, attending a school with better peers (as measured by the average socio-economic background of students) has a positive and significant effect of non-negligible size on test scores, and this effect is equally strong for all three subject areas. However, analysing the distribution of the benefits of attending a higher SES school over the conditional test score distribution by quantile regression methods reveals different patterns by subject area: although the results for reading suggest that the largest gains occur at the lowest quantiles, the results for mathematics indicate that students of high and low ability benefit equally from attending schools with a better student intake, and quantile regression results for science are at best marginally significant.

Section 2 of the paper defines the concept of school composition effects and discusses their identification. Section 3 details the data, and Sect. 4 presents the empirical framework. Section 5 reports results and sensitivity analyses, and Sect. 6 is the conclusion.

2 School composition effects

There is a variety of approaches to education around the world, from early tracking of students of different abilities into different types of school (e.g. Germany, Austria) to completely comprehensive schooling until the end of lower secondary schooling (e.g. the Nordic countries, Spain).4 There are arguments for and against each of these systems. Some believe that schools with an intake from differing home backgrounds benefit students in both academic and non-academic ways. However, critics argue that students make better progress if educated with others of similar ability, and that the presence of less able students in schools with a high-level SES intake is harmful for both able and less able students.

School composition might influence academic achievement by affecting the learning environment, and by direct peer interactions. The learning environment may be created by differential school or classroom climates and differential teacher practices, as well as by direct peer interactions. An explanation suggested by Wilson (1959) and Blau (1960) in their seminal articles is that one linkage between the school composition and student outcomes is normative. The underlying idea is that individual students internalise the norms of their educational setting to guide their learning and behaviour (Dreeben and Barr 1988). For example, students from middle-class backgrounds, whose social class predisposes them to have high educational aspirations, might lower their aspirations when they attend schools with a working-class student intake. On the other hand, schools with a high socioeconomic intake might develop a student social system that approves of intellectualism. Another explanation is that the effects arise because students use their educational setting to make
4 Information on streaming practices in different countries is offered in OECD (2003), Fig. 7.21.
comparisons about their performance and to develop academic self-perceptions. A third explanation is that the effects arise because schools and teachers modify their instructional practices to take into account the characteristics of the student group (Mahard and Crain 1983). School composition can affect individual students’ performances not only directly, but also indirectly, via teachers’ perceptions. Negative perceptions can lead teachers to have artificially low expectations of pupils creating a self-fulfilling prophecy. There is also the possibility that the ability or social mix of the school influences the curriculum. For example, low-SES students may do better in mixed schools, because teachers target their curriculum at the class average, which is likely to be higher in predominantly middle-class schools than in those with mostly low-SES students. This argument assumes that low-SES students respond to superior instruction. Last, the effects may arise from direct student-student interactions and peer tutoring in schools, classrooms and groups (Webb 1991). However, analysing social interactions empirically imposes formidable statistical problems. In his seminal work, Manski (1993) differentiates between two basic components of an agent’s interactions with her schoolmates (endogenous and exogenous effects), and maintains that endogenous and exogenous effects cannot be separated empirically. The first (endogenous) effects refer to how agents are affected by the contemporaneous behavioural choices of group members (in our case, how student achievement depends on the achievement of schoolmates). The second (exogenous) effects are where individual behaviour varies with exogenous observed characteristics of a group; here that student achievement depends on the socioeconomic composition of the schoolmates. Endogenous effects create a “social-multiplier” effect of policy interventions, which can have a spill-over effect to non-treated individuals and back to treated individuals if they affect the behaviour of the former via endogenous interactions with the latter (Manski 1993, 2000). Exogenous interactions, however, do not generate such a “social multiplier”. A distinction between endogenous and exogenous effects is not feasible with the identification approach used in this study. For example, the average socioeconomic background of peers affects the individual student’s outcome because of the “true exogenous contextual effect”, but also because the average socioeconomic background of peers affects their average outcome, and their average outcome may affect the individual student’s outcome (the endogenous peer group effect). However, for the kind of policy intervention in focus here—decreasing social segregation across schools—estimating the combined effect is clearly relevant. The main concern of this study, however, is not to distinguish between endogenous and exogenous effects, but to separate them from correlated effects. These arise when student outcomes are correlated because they come from similar home backgrounds and attend the same schools. In this case, the positive effects of group behaviour on individual behaviour can easily be misinterpreted as social effects, when in fact they are due to characteristics common to all members of a reference group. Correlated effects are most likely to be caused by self-selection, which can have several different sources. First, parents with a strong commitment to their
child's education may choose to live close to schools they consider adequate. They may use the student composition of the school as an indicator of quality. They may also take into account the pupil-teacher ratio, or the percentage of core subject teachers at the school who are specialists in their fields, because these are considered more conducive to learning. Since a greater involvement in the child's education may lead to better educational outcomes, such self-selection may bias estimates of the school composition effect upwards because of the non-random assignment to schools. Conversely, if parents who are very selective in their choice of school invest relatively little of their time in their children's education, this may generate a downward bias of the school composition effect. Previous studies have either ignored this potential bias,5 or have tried to account for it, for example by instrumental variables techniques.6 However, credible instruments are notoriously difficult to find. In recent studies, endogeneity issues have been addressed using large panel datasets, which allow the authors to account for selectivity by including school-, grade- and student-level fixed effects (Hanushek et al. 2003; Betts and Zau 2004). When natural experiments or even true experimental data are available, there are further possibilities for avoiding selection bias.7 However, good IV strategies are rarely available. In the absence of credible instruments, researchers often use rich datasets in an attempt to overcome selection problems. Here, causality can be inferred only if the selection process is fully captured by observable variables. On this basis, the best way to deal with endogeneity issues in data such as mine is probably to control for the variables that are likely to affect the choice of school. The omitted variables bias can be significantly reduced by using a set of powerful controls affecting both student test scores and school choice. Hence, we need family and student background variables to control for differences in parental circumstances and tastes in education, and school factors that might influence school selection and learning. The rich information provided in the PISA dataset used in this study explicitly allows me to control for such factors, lending credibility to the identification approach used here.8 These data are especially apt for this strategy, because a main focus of the PISA study (as previously in other international achievement studies, e.g. the TIMSS) is the influence of the family on student learning.
5 E.g. Levin (2001), Driessen (2002), Willms (1986), Betts and Morell (1999), Zimmer and Toma (2000). 6 See e.g. Feinstein and Symons (1999) and Robertson and Symons (2003). In an attempt to provide an alternative identification strategy for this study, I followed the approach in Feinstein and Symons (1999) and instrumented school composition by 15 local authority dummies. Although the instruments were strong predictors of school composition in the selection equation, a test of overidentifying restrictions clearly rejected the validity of the instruments (p = 0.018), rendering this identification strategy unfeasible for my study. 7 See e.g. Boozer and Cacciola (2001), Sacerdote (2001), Zimmerman (2003) and Hoxby (2000) for natural experiments; and Falk and Ichino (2006) for a true experiment in a slightly different context. 8 This identification approach can be viewed as a "linear matching" approach.
As in the matching literature, identification hinges on controlling for all possible variables one thinks might affect selection of a school and the measurement of the outcome.
Parental interest in the child's education is regarded as one of the strongest predictors of achievement in schools, and is also likely to drive the choice of school. When parents interact well with their children they can offer encouragement, follow their children's progress, and otherwise show their interest in how their children are doing in school. Thus, in addition to traditional variables like parental education, occupation, income and wealth and the family structure, composite variables such as parental academic interest, home educational resources, and cultural possessions are included in the regressions as controls. All these aspects of the home environment could be said to indicate an academic orientation which might influence both the choice of school and academic achievement. It is also likely that school inputs, such as teachers, are not assigned randomly across schools, and that the school composition serves as a proxy for school quality.9 Therefore, in preliminary analyses, I controlled for a broad range of school correlates. However, as many of these were rarely significant,10 only a selected subsample is included in the final regressions. As explained above in the definition of school composition effects, changes in school climate, teaching methods and teacher motivation are regarded in the literature as part of the effect of changing the socioeconomic composition of the student body, and thus are endogenous variables in the regressions. On the other hand, in a setting like this, where identification relies on an extensive set of controls, differences in school climate and teacher motivation may also capture parental selection of schools. As there might be some disagreement among scholars on whether or not to include information on school climate, teacher behaviour and so on, results from both specifications are estimated and discussed. I readily acknowledge that this set of controls, though rich, is probably still not exhaustive. However, what gives additional weight to my identification strategy is that Schneeweis and Winter-Ebmer (2007), who conduct a similar analysis on the Austrian subsample of the PISA 2000 data and address endogeneity by estimating within-schooltype fixed effects, find school composition effects of similar size.11 They argue that in Austria's differentiated education system, self-selection is mainly based on the choice of school type, and not on the choice between schools of the same type; it can therefore be assumed that students and parents who have chosen a particular type of school share unobserved, as well as observed, characteristics. Thus, they argue, schooltype fixed-effects
9 For example, Hanushek et al. (2004) show that teaching lower-achieving students is a strong factor in teacher mobility. Their results highlight the difficulty that schools with academically disadvantaged students have in retaining teachers. 10 This is confirmed by Schneeweis and Winter-Ebmer (2007) in their study on Austrian schools
using the Austrian subsample of the PISA 2000 data. They include a wide variety of school characteristics, but only two out of 12 are consistently significant. 11 They find positive and significant mean school composition effects of between 0.05 and 0.07 standard deviation (SD) for a one SD rise in the school composition variables, while my results suggest effect sizes of 0.05 and 0.08 SD, depending on the specification (models (2) and (3)). However, they measure school composition in a different way, by mean values of parental occupation and cultural communication. I replicated my analysis using their measures and obtained results very similar to those reported in the present study. Only the results for reading are fully comparable, as those for mathematics and science are pooled in Schneeweiss and Winter-Ebmer.
Fig. 1 Boxplot for student sample size in PISA schools: with and without additional data drawn from administrative registers (per-school sample sizes from 0 to about 200 students, shown separately for the PISA data and for the data including administrative records)
estimation helps mitigate self-selection bias. The similarity between their results and mine therefore strengthens the credibility of my estimation strategy: I find similar results even though I am unable to correct for self-selection on unobservables in the way they can, since in Denmark schools are not distinguished by different academic tracks. (There is some further discussion of Schneeweis and Winter-Ebmer's results in Sect. 5.) However, even though similar results in the literature strengthen confidence in the identification strategy, there may still be concern about remaining bias due to correlated effects, which means that any results must be interpreted with considerable caution.

3 Data

I use data for Denmark from the OECD's Programme for International Student Assessment (PISA). The first wave of the PISA survey was conducted in the year 2000 in 32 countries, testing reading, mathematical and scientific literacy. The PISA target population is 15-year-old students. The tests focus on the demonstration of knowledge and skills in a form that is relevant to everyday life, rather than on how well students have mastered a specific curriculum. As well as performing cognitive tests, students answered a background questionnaire, and school principals completed a questionnaire about their schools. For a detailed description see OECD (2001, 2002a,b), which records the sampling procedure, design of the questionnaires, response rates at different stages of the survey, and international results.

In Denmark the sample is stratified by school size (very small, small and big). Within this, schools were selected systematically, with probabilities proportional to size. Within each school, up to 28 students aged 15 years were selected randomly for participation;12 however, many schools had fewer participants.
The average number of students per school sampled in the Danish PISA subsample is only 19 (see Fig. 1). This is problematic in my study, as the accuracy of the variable of main interest (school composition) may be affected by the small number of observations. However, for this study the administrative registers from Statistics Denmark provided home background data13 for the full sample of students in the PISA schools from approximately the same age group (i.e. grades 8 and 9).14 Figure 1 shows boxplots for the distribution of the school sample size available for measuring school composition before and after drawing on register data. Including administrative records for all eighth- and ninth-graders at the schools substantially increases the sample sizes for the estimate from an average of 19 students per school to 78 (an increase by a factor of 4!). I use the full sample of schools and students to construct the school composition variables.15 In Sect. 4.2, as a robustness check, I provide additional results from an estimation which leaves out the (few) schools with fewer than 20 student observations in the eighth and ninth grades.

Missing values are handled by including dummy variables (which are set to 1 if the value is missing for the observation) to control for missing values of explanatory variables in all regressions.16 The value of the explanatory variable is set to zero in these cases.

Using Item Response Theory to compute the scores, PISA mapped performance in each subject on a scale standardized to an OECD average score of 500 points and a standard deviation of 100 points. The focus of the PISA 2000 study was reading literacy. For this reason, the assessment of mathematics and scientific literacy was more limited: the sample size for the reading test scores is twice that for mathematics and science (4,175, 2,350 and 2,314 for reading, mathematics, and science, respectively). The means and standard deviations of the test scores and other variables are presented in Table A1 in the appendix.
The highest parents’ occupational status was missing for 6% of the students, and data on highest parental education were missing for 4% of the students. School background variables were missing for 4%. I analysed the mean characteristics of students with missing values for father’s and/or mother’s education, and found that they come from more disadvantaged backgrounds and score lower in the achievement tests.
Four variables were employed as statistical controls for student intake: parental (highest) education, student living with both parents, student ethnicity and parental income. These are important determinants of student achievement in the literature on school composition and peer effects. Shavit and Williams (1985) argue that some studies failed to observe a statistically significant effect because there was little variation in the socioeconomic contexts of the schools. In my data, mean parental education in the PISA schools varies between 9.7 and 15.4 years of schooling, the share of minority students varies between 0 and 100%,17 the share of students living with both natural parents varies between 6 and 100%,18 and the average parental income is between 110,000 DKK and 656,000 DKK (see Table A1 in the appendix). Thus, little variation in the mean school composition measure is clearly not a concern. However, the student's socioeconomic background is only one dimension. Another is the heterogeneity of the student intake. Schools catering for a diverse student body might either profit from it, or suffer from it, because it can be difficult to deal with students with a wide range of abilities. The variation of the measure for "variation in student background" (measured as the standard deviation of peer backgrounds) is smaller than the variation in average peer background, which makes it more difficult to detect significant effects on test scores (see Fig. A1).

In preliminary analyses, I included the four variables measuring school intake both separately and jointly.19 However, including them jointly revealed, unsurprisingly, a high degree of correlation between them. The pairwise correlation coefficients range between 0.14 and 0.78, and are all highly significant. Also, to aid the interpretation of the results, it is helpful to have a one-dimensional index of student quality at each school. This was created by calculating a standardized SES variable, which is the first principal component (Jolliffe 1986) of the four indicators of SES.20 To allow for the variation in how the student mix affects student outcomes, I include the standard deviation of the school composition measure within the school as a measure of heterogeneity along with the mean.

As well as the usual set of parental background characteristics (e.g. education, wealth, family patterns; see Table A1 for the full set of controls), there are several questions in the PISA student questionnaire about parental involvement and interest. The responses are summarized in three indices.21
17 The extremum of 100% minority students is due to small samples in some schools. For schools with at least 20 students in the eighth and ninth grades, the highest share is 93%. In the section on sensitivity I test for the robustness of the results when I exclude these (few) schools with very few students in the sample. 18 Again, the minimum of 6% of students living with both parents is because some schools had very few students in the sample. For schools with at least 20 students in the eighth and ninth grades, the lowest share is 15%. 19 I report the results of these estimations in the section on sensitivity checks. 20 The weights in the first principal component of the four SES variables are: parental education
(0.50), parental income (0.65), living with both parents (0.50), native background (0.29). The first principal component accounts for 38% of the variance in the set of four variables. This is similar to other studies which use PCA on student-level data to create a one-dimensional school composition variable (e.g. Willms 1986). 21 See the Data annex, or OECD (2001), Annex A1, for details. In preliminary analyses I also included the index for social communication, but as the coefficient estimates were rarely significant, this composite is not included in the final specification.
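The construction of the composite school-SES measure can be sketched as follows; this is only an illustration with hypothetical column and file names, assuming standardized inputs and the first principal component as described above:

```python
# Sketch: school-SES composite as the first principal component of four
# standardized background indicators, aggregated to school level (mean and
# standard deviation). Column and file names are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

students = pd.read_csv("register_students.csv")  # hypothetical merged register data
ses_vars = ["parental_education", "parental_income", "both_parents", "native_background"]

z = StandardScaler().fit_transform(students[ses_vars])
students["ses_index"] = PCA(n_components=1).fit_transform(z)[:, 0]

school_composition = students.groupby("school_id")["ses_index"].agg(
    mean_school_ses="mean", variation_school_ses="std")
print(school_composition.head())
```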
Parental academic interest22 measures the academic and, more broadly, cultural competence in the student's home. Importantly, it includes whether parents engage in these activities with their children. Home cultural possessions such as classical literature, poetry and works of art have frequently been shown to be related to educational success. Home educational resources, such as a desk for studying, textbooks and computers, focus on resources that are directly useful for the student's schoolwork.

A questionnaire for principals provided information on characteristics of the school. The set of characteristics included in the regressions has information on school size (entered as a quadratic function); the student-teaching staff ratio; the index of school autonomy;23 the share of teachers of Danish/mathematics/science at the school holding a tertiary level degree in the respective subject; and a dummy indicating whether or not the school uses standardised tests to assess the academic skills of 15-year-olds.24 Additional information on school climate and teacher motivation and behaviour is captured by four composite variables: teacher behaviours, teacher morale, teacher support and teacher-student relations (see the Data Annex on the sets of single items included in the calculation of each composite).

4 Method

4.1 Estimation

All regressions are of a standard education production function form:25

Aij = βXij + γSj + δSCj + εij    (1)
22 This variable has been renamed “cultural communication” in the final release of the PISA dataset. 23 The variable of school autonomy is derived from the number of categories that principals classified as not being a school responsibility. The scale was then inverted so that high values indicate a high degree of autonomy. 24 Other controls (hours of instruction in the test subject, the school’s physical infrastructure and educational resources, the proportion of fully certified teachers, the proportion of teachers having attended a programme of professional development during the past 3 months, and a dummy variable indicating private schools) were included in preliminary analyses, but their coefficient estimates were rarely significant. I have excluded many of these from the final regressions. One variable that it would be useful to include in the education production function model is school expenditure. In some school systems, the allocation of resources between schools with varying school intakes is important. Unfortunately, this information is not available in the PISA database. In the Danish case, however, these problems seem less severe, as schools are compensated financially for accommodating pupils with learning disabilities and bilingual students. 25 Ideally, to assess the effect of the school composition on student achievement, the students’ intrinsic ability or previous achievement when they entered school should be taken into account. However, no such data are available for this study, as the PISA data are cross-section data, and no external information on previous achievement or, for example IQ tests, is available for my sample, since the only national assessment of students in Denmark is administered at the end of compulsory schooling (i.e. after the PISA test is taken).
Observable characteristics, X, include information on parental education, occupation, income and wealth, home educational resources, parental academic interest and cultural possessions, plus dummy variables for gender, ethnicity, grade attended, and whether the student is living with both natural parents.26 Information about school characteristics, S, includes a quadratic function of school enrolment, students-per-teacher ratio in the school,27 the proportion of Danish/mathematics/science teachers at the school with a college degree,28 an index of school autonomy, and whether standardized tests are used for 15-year-olds at the school. Information on school composition, SC, includes the mean and variance of the socioeconomic school intake measured by a composite variable calculated by principal component analysis on the basis of four factors: parental education, if the student is living with both parents, ethnicity and parental income.29 If the composition of the student's school is randomly assigned, or at least not systematically correlated with unobserved factors influencing both the choice of school and academic achievement, then Eq. (1) provides an unbiased estimate of the school composition effect, δ.

For each test domain (reading, mathematics, and science), three model specifications are estimated: the first, model (1), includes only student characteristics and family background controls; model (2) adds school characteristics, and model (3) has additional information on the learning climate and teacher behaviours at the school. I begin the analysis by estimating the models by ordinary least squares methods, which estimate effects at the centre of the conditional test score distribution. However, as I am explicitly interested in the possibility that the composition of the school may differentially affect students of different ability levels, I re-estimate the models by quantile regression techniques. Several recent papers have similarly used quantile regressions to learn about points in the distribution of the dependent variable beyond the mean.30 The basic quantile regression model specifies the conditional quantile as a linear function of covariates. For the θth quantile, a common way to write the model (see, e.g. Buchinsky 1998) is

yi = xi βθ + uθi,   Quantθ(yi | xi) = xi βθ,   θ ∈ (0, 1)    (2)
26 Further controls such as students’ age and the number and birth order of siblings were included in preliminary analyses, but their coefficient estimates were insignificant in almost all regressions. They have been excluded from the final models so as not to reduce the power of estimating the remaining coefficients. 27 The student-per-teacher ratio is often preferred to class size as a control, because class size might be biased by within-school allocation of difficult to teach students to smaller classes. 28 Only the share of Danish teachers is included in the reading test score regressions, and similarly for mathematics and science. 29 See data section for details on the composite school SES measure. 30 Examples of such work include Buchinsky (1994), Eide and Showalter (1998), Green and Riddell
(2003), Bedard (2003), and the special issue of Empirical Economics (Fitzenberger et al. 2001).
where Quantθ(yi | xi) denotes the θth quantile of yi, conditional on the regressor vector xi. The distribution of the error term is left unspecified. It is assumed only that uθi satisfies the quantile restriction Quantθ(uθi | xi) = 0. The important feature of this framework is that the marginal effects of the covariates, given by βθ, may differ over quantiles.

5 Results

5.1 Do school composition effects exist?

The main results of the OLS estimations for all three test domains are shown in Table 1.
Table 1 Ordinary least squares results for reading, math and science literacy

                                 Model 0            Model 1            Model 2            Model 3
                                 Coef. (Robust se)  Coef. (Robust se)  Coef. (Robust se)  Coef. (Robust se)
Reading literacy
  Mean school SES                54.13 (7.49)       18.91 (5.57)       17.36 (5.20)       12.21 (4.92)
  Variation school SES           10.13 (9.57)       7.80 (7.34)        4.01 (7.01)        5.24 (6.58)
  Student background controls                       x                  x                  x
  School controls                                                      x                  x
  Learning environment                                                                    x
  No. observations               4,175              4,175              4,175              4,175
  Adj. R-squared                 0.07               0.29               0.30               0.31
Math literacy
  Mean school SES                50.34 (6.24)       21.89 (5.67)       20.01 (5.57)       16.95 (5.52)
  Variation school SES           1.86 (11.95)       −3.46 (9.24)       −7.44 (7.67)       −6.06 (7.75)
  Student background controls                       x                  x                  x
  School controls                                                      x                  x
  Learning environment                                                                    x
  No. observations               2,350              2,350              2,350              2,350
  Adj. R-squared                 0.07               0.23               0.24               0.25
Science literacy
  Mean school SES                52.60 (3.69)       19.15 (5.99)       17.38 (5.99)       15.95 (5.95)
  Variation school SES           6.65 (11.84)       5.85 (9.57)        0.25 (9.37)        1.63 (9.09)
  Student background controls                       x                  x                  x
  School controls                                                      x                  x
  Learning environment                                                                    x
  No. observations               2,314              2,314              2,314              2,314
  Adj. R-squared                 0.06               0.22               0.23               0.23
Student background controls include: gender, grade, ethnicity, nuclear family, parental education, occupation, income, and wealth, home educational resources, parental academic interest, cultural possessions. School controls are: a quadratic function of school enrolment, student/teacher ratio, proportion of reading/math/science teachers proportion of reading/math/science teachers educated in the named subject, use of standardised tests (0/1), index of the school autonomy. Learning environment characteristics include: teacher support, teacher morale, teacher behaviour and studentteacher relations. Dummy variables for missing values are included in the regressions
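The point estimates in Table 1 are obtained with student weights and school-clustered standard errors, as described in the next paragraph. A minimal sketch of such an estimation, again with hypothetical column names (the paper additionally includes missing-value dummies rather than dropping incomplete cases), might be:

import statsmodels.formula.api as smf

# Placeholder names: w_fstuwt stands in for the PISA "student final weight"
# and school_id for the school identifier.
use = ["read_score", "mean_school_ses", "var_school_ses", "female", "grade8",
       "native", "parent_educ", "w_fstuwt", "school_id"]
est = df.dropna(subset=use)   # simplification; the paper keeps these cases via dummies

wls = smf.wls(
    "read_score ~ mean_school_ses + var_school_ses + female + grade8 + native + parent_educ",
    data=est,
    weights=est["w_fstuwt"],
).fit(cov_type="cluster", cov_kwds={"groups": est["school_id"]})

print(wls.params["mean_school_ses"], wls.bse["mean_school_ses"])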
To adjust for the stratified sampling design, student weights are used in the regressions.31 Moreover, standard errors are adjusted to account for the fact that students are clustered within schools. I also calculated confidence intervals by bootstrapping (1,000 repetitions), where both the principal component equation and the achievement equation are included in each bootstrap repetition to take account of the additional estimation error in the school-SES variables introduced by the first-stage principal component analysis.32 The confidence intervals for the school-SES estimates were slightly larger when estimated with bootstraps (by 8% for reading and by 11% and 13% for mathematics and science, respectively), but the results are still highly significant. As explained earlier, three different model specifications are estimated for each of the three test subjects: model (1) has only student and family background controls in addition to the school-SES variable, model (2) includes school characteristics, and model (3) adds controls on the learning environment and teacher behaviours. In addition, results from a model without controls are reported (Model 0). The estimated production function models of student test scores produce coefficients and R-squared values that are consistent with those in the literature.33 Interestingly, the models are able to explain less of the variance in mathematics and science scores than in reading scores.34 When comparing the sizes of R-squared across models (0) to (3), we see that this is because parental characteristics explain reading scores better than they explain mathematics and science scores.35 The additional variance explained when school and teacher factors are added to the model is very small. Our primary interest is in the coefficients on the school-SES variables. The results for the school composition variables show that only the coefficients for the level are (positive and) significant, while the coefficients on the variation in student background at a school are imprecisely estimated, with standard errors at least as high as the coefficients (so the coefficients are not significantly different from zero).36 Thus, attending a school with higher SES schoolmates is associated with higher
31 In the PISA sample design, schools are the primary sampling unit. Schools were sampled systematically, with probabilities proportional to size. I use the so-called "student final weight" provided in the PISA dataset. This weight, Wij, for student j in school i, is calculated to take account of the probability of inclusion of school i in the sample, the probability of selecting student j from within the selected school i, and some deficiencies in the sampling design. See OECD 2002b, p. 51, for details.
32 I thank Bernd Fitzenberger for this suggestion.
33 However, values of R-squared for models containing prior achievement are typically higher in the literature.
34 A similar and even more pronounced result is also found by Robertson and Symons (2003), who are able to explain less than half the variance in mathematics scores compared to reading scores.
35 This greater importance of the students' home backgrounds for reading performance might be interpreted as evidence of lower intergenerational mobility in reading skills than in mathematics and science literacy.
36 This might be partly because there is less variation in the "variation of student background" variable than in the "mean school composition" variable; see the discussion in the data section.
test scores in reading, mathematics and science. There are only small differences in school composition estimates across test subjects. The estimates of models (2) and (3) in Table 1 indicate that a one-standard deviation increase in school-SES (0.44) would raise test scores by about 0.05–0.09 standard deviation, depending on the subject and specification.37 I also briefly consider the coefficients of individual variables, which are reported in the full results in Table A2 in the appendix. Most of the estimated coefficients of the student characteristic and family background variables are statistically significant, and are in the direction one might expect a priori. For example, controlling for other factors, the coefficients on parental education and occupation, parental academic interest, educational resources in the home, cultural possessions (not significant for mathematics), being a native Dane and living with both parents (not significant for science) are positive and significant—results, which are typical in the literature. Students who attend lower grades than the modal for 15-year-olds score lower in all three domains. Female students have higher predicted reading scores than male, but lower scores for mathematics and science. Counter-intuitively, greater wealth as measured in the PISA dataset is associated with lower reading scores, while the (negative) coefficient on wealth is not significantly different from zero for mathematics and science.38 Comparing the school composition estimates across models, I find that adding student and family background controls substantially reduces them, while including school characteristics (model 2) has only a small effect. However, adding school climate and teacher controls (model 3) reduces the point estimates for reading scores by 30%, and by 15% for mathematics. Science results are only marginally affected.39 Even though few individual estimates of school factors and the learning environment are individually significant (see Table A2),
37 Similar results are obtained when heterogeneous effects are allowed for by subsample regressions on different quartiles of a student's own SES. Effects are sizable and significant for the two lower quartiles, but much smaller and insignificant for the two upper quartiles, suggesting that students from low-SES homes would profit from attending schools with students from higher-SES homes, while the effect for higher-than-average-SES students is not significantly different from zero.
38 Additional analyses of the wealth variable were conducted, including answers to the nine separate questions from which the wealth composite is created [see OECD (2001) for details]. The results suggest that the composite does not measure only pure "wealth", but also other conditions which affect test scores negatively, such as the need to buy educational software (i.e. parents of low achievers buy educational software to support learning) and preferences for owning several television sets and cars, which might capture "non-preferences" for academics rather than wealth. Although these analyses have helped uncover the underlying factors behind the counter-intuitive negative sign of the wealth estimate, they cannot indicate why these factors affect reading, mathematics and science differently. Further investigation along this line is beyond the scope of this study.
39 However, as will be seen in Sect. 4.3, including school characteristics has different effects on the school-SES estimates at different points of the test score distribution. OLS results provide only a snapshot from the centre of the distribution.
both the school factors added in model (2) and the information on the learning environment entered in model (3) are jointly highly significant.40 Of particular relevance to this study using observational data is the comparison of my results with findings from analyses employing experimental or quasi-experimental data, which are generally regarded as more reliable and credible for studies of selection issues. Hanushek et al. (2003) estimate the effect of the peer composition (measured by schoolmates’ lagged test scores) on students’ test scores in mathematics. They control for the most important determinants of achievement that confound peer estimates by removing student and school-by-grade fixed effects as well as observable family and school characteristics. Their results are similar to the results of the present study: that students (regardless of their initial position in the school test scores distribution) appear to benefit from having higher-achieving schoolmates, while the variance in achievement seems to have no systematic effect. Hoxby (2000) uses changes in the gender and racial composition of a particular grade in a school in successive years to identify peer effects. She finds that an exogenous change of 1 point in peers’ reading scores raises a student’s own score between 0.15 and 0.4 points, depending on the specification. In a somewhat different setting, Zimmerman (2003) implements a quasi-experimental empirical strategy to measure the effect of room-mates on academic outcomes. He finds small but statistically significant effects. Two observations summarize the main results for the estimations by OLS. First, the level of SES at the school appears to be significantly related to test scores in all three subjects, but there is no conclusive evidence as to whether the heterogeneity of the student body affects test scores. Second, the effect of school composition is of similar magnitude for each test subject, though slightly smaller for reading skills when information on the learning environment is entered in model 3. 5.2 Heterogeneous effects: quantile regression estimates The OLS findings have focused on how school composition affects average performance. However, if a change in composition has different effects on lowand high-ability students, quantile regressions results will differ over the conditional test score distribution.41,42 To examine this hypothesis, I re-estimate 40 As mentioned in the data section, I have already excluded some school controls with insignificant coefficients. I refrain from excluding further controls, because although these coefficients are insignificant predictors of test scores, excluding them increases the school-SES estimate. 41 As Woessmann (2004) puts it: “Student ability itself remains unmeasured, virtually by definition. However, once family background effects are controlled for, the conditional performance distribution should be strongly correlated with ability (or, more precisely, with that part of ability that is not correlated with family background).” 42 The conditional test score distribution reflects the remaining variation in test scores after controlling for observable covariates. This remaining variation can be related to several dimensions of ability, and there may be additional randomness.
Fig. 2 Quantile regression results for reading, math and science literacy. [Three panels (READING scores, MATH scores, SCIENCE scores) plot the school-SES estimates (vertical axis, 0 to 30) against the quantiles 0.05 to 0.95 (horizontal axis) for Model (1): student background (STUD); Model (2): STUD & school characteristics (SCH); and Model (3): STUD, SCH & school climate/teacher motivation.]
the models by quantile regression methods, using quantiles ranging from 0.05 to 0.95 at each 0.05. Full results are presented in Tables A3a–c.43 As for the OLS estimation above, in addition to a conventional single-equation bootstrap procedure, I calculated confidence intervals using a bootstrap procedure which also includes the first-stage principal component analysis. These results are reported for the coefficients of the main parameter of interest to this study in Fig. 2 and Table A4. As above, estimates of the degree of heterogeneity of the student composition are imprecisely estimated throughout, and are not shown in graphic form, but are reported in Tables A3a–c. As shown above, changes in learning climate and teacher behaviours are regarded in the literature as part of the effect of changing socioeconomic composition, and can therefore be seen as endogenous variables in the estimations. On the other hand, in a setting like this, where identification relies on including an extensive set of controls, such differences may also capture parental selection of schools. Both specifications are therefore estimated and their results discussed. As before, three different models are estimated for each test score domain, one including only student background characteristics, another adding a range of school characteristics, and the last also adding variables of the learning climate and teacher behaviours. Estimation by OLS in the previous section revealed positive and significant effects of the student composition on students’ test scores at the centre of the distribution for all three subjects. However, quantile regression results for reading scores suggest that school composition effects are decreasing over the conditional test score distribution (Fig. 2, left panel). The results reveal positive and significant effects at the lower end of the test score distribution (significance at the 5% level is indicated by solid circles); the magnitude of the effect decreases and the estimates eventually become insignificant at the (very) upper end of the distribution. As can be seen, compared to models containing only student background (model 1) or student background plus school characteristics (model 2), the estimates are somewhat smaller when controls on learning climate and teacher behaviours are added (model 3), and are significant only for the lower half of the test score distribution (except for 43 In order to convey a sense of the effect of student and school inputs across the entire conditional achievement distribution without reporting dozens of regressions, only results for the 10th, 25th, 50th, 75th and 90th percentiles are reported in the full results table.
the very lower end, where the estimates are also insignificant). As explained in Sect. 2, the decrease in the estimated school composition effect when learning climate and teacher behaviours controls are added suggests that part of it is due to a change in the learning environment at school or classroom level, rather than a direct peer effect. The results for science exhibit a similar decreasing pattern of the school-SES effect (Fig. 2, right panel), but are significant only over a minor range of the lower part of the test score distribution for models (1) and (2). When controls on learning climate and teacher behaviours are added, the point estimates retain their decreasing pattern, but the estimated effects are insignificant over almost all of the test score distribution, suggesting that the observed school composition effect is almost entirely due to changes in the learning environment. Thus, for science, no remaining variation giving scope for direct peer interactions is found. To sum up, for reading, and to some degree also for science, I find that higher school-SES raises test scores at the lower end of the achievement distribution, while students at the (very) upper end are unaffected. The results for mathematics, however, are different (Fig. 2, centre panel). Although a decreasing trend is apparent over the lower range of the distribution (about the 15th to the 65th percentiles), the school-SES effects increase again for higher percentiles. Thus the pattern is U-shaped, suggesting that school-SES matters for high- and low-ability students alike. As with reading test scores, adding learning climate and teacher behaviours controls reduces the estimated school-SES effect (especially at the lower end of the distribution) for mathematics scores. This suggests that part of the effects at the lower end of the mathematics score distribution is probably due to differences in the learning climate and teacher behaviours typical of schools attended by low-ability students. However, in spite of the decrease in magnitude, the estimates stay significant over most of the mathematics score distribution. The only study I am aware of which estimates school composition effects similar to those in the present study using quantile regression is by Schneeweis and Winter-Ebmer (2007).44 As explained in Sect. 2, they use the same data source as I, but for a different subsample (Austria rather than Denmark). Comparing our studies it becomes evident that, in spite of some differences in the empirical analysis,45 we have had similar results. In particular, their quantile regression results display a similar decreasing pattern over the test score distribution for the school composition measure, which in spirit is closest to mine (based on the socioeconomic index of occupational status), having a maximum effect size of about 0.11 SD at the lower end of the reading test score distribution (15th 44 Levin (2001) also uses quantile regression techniques to estimate peer effects. However, his estimates are based on measures of students’ IQ, and the kind of peer effects he estimates are not standard, but specific (the number of students in the class with similar IQ). 45 In particular, they include school-type fixed effects in their estimations, which is not possible for Denmark. Moreover, they use the socioeconomic index of occupational status available from the PISA data to measure the socioeconomic status of schoolmates.
percentile) versus 0.08 and 0.10 SD at this percentile in my study, depending on the specification (models 2 and 3).46
To sum up, the results suggest that for reading literacy, school-SES effects are positive and significant at the lower end of the test score distribution (except for the extreme lower end). These effects decrease along the distribution and finally turn insignificant at the (very) upper end. This pattern of results suggests that: (i) weak students' achievement might be enhanced by a more equal mix of students across schools, and (ii) mixing helps the weak students more than it hurts the best students, which would imply an increase in average achievement. However, it is important to recognise that school-SES affects students' reading scores also over most of the upper half of the test score distribution (though not in the models where learning climate and teacher behaviours are added), albeit to a lesser extent than at the lower end; a mixing policy implying a less favourable school composition for these students could therefore hurt their reading achievement. Even if mixing would hurt the best students, however, helping weak students might be a primary policy focus (equity concern). For math literacy, only (i) applies, while (ii) does not, because the effects are more or less symmetrical around the median of the distribution. Therefore, we should not expect average outcomes to be significantly different if there were more mixing of socioeconomic groups; but if concerns about equity prevail, mixing could produce a more desirable outcome. Lastly, the effects for science are at best marginally significant. The overall conclusion is that when equity is the primary policy objective, a more equal mix of students across schools would be preferable, helping weak students to increase their reading and mathematics skills. However, average scores are predicted to increase only for reading.

5.3 Are the results robust?

I have explored the sensitivity of the results to changes in the specification of the model along various lines. First, I dropped students at schools with fewer than 20 students in the eighth and ninth grades, to further improve the accuracy of my school-SES measure. This meant dropping 1.5% of the sample, which slightly decreased the point estimates of the school-SES variables but did not change their significance or sign. Second, I investigated how each of the four SES variables used for the principal components analysis performed when included separately in the regressions. Thus, instead of using the school-SES variable created by principal components analysis, I included the school mean of the length of the parents' schooling, the share of minority students, the percentage of students living with both parents, and the average parental income in the household separately as measures of
46 However, using the school composition measure based on cultural communication yields homogeneous effects across the test score distribution (Schneeweis and Winter-Ebmer 2007).
school composition in four different regressions.47 The results are as might be expected: higher average parental education is significantly positive-related to test scores in all subjects, as is average parental income, the percentage of students living with both parents, and the share of native Danish students. Third, I investigated the linearity of the school-SES effect. The purpose of doing so is to test for the concavity in the relationship between school composition variables and achievement, given that some authors have found evidence of non-linearities (e.g. Zimmer and Toma 2000; McEwan 2003). When I included the (mean) school-SES measure as a quadratic function in the estimations, the school-SES coefficients change only a little and remain highly significant throughout.48 The estimates for the squares are negatively signed, but only the estimate for science is significant at the 5% level, while the estimates for reading and mathematics are only marginally significant. As there is some evidence of non-linearities in the OLS results, I also estimated the quantile regression models, including square terms for school-SES, but they were seldom significant.49 I therefore chose to stay with the linear formulation of the estimation equation. Fourth, I investigated whether the school-SES estimates differ by gender. If so, changes in the school composition would translate into changes in the gender gap, which is important in the Danish data: girls are far better readers than boys, by one quarter of a standard deviation of reading scores, while boys outperform girls in mathematics and science by 15/12% of a standard deviation of mathematics/science scores. To investigate the possibility of heterogeneous gender effects, I estimated the model including (i) an interaction term between gender and the mean school-SES variable, and (ii) separate sub-samples for boys and girls, to allow all coefficients to differ by gender. However, I found no systematic differences in the school composition effects by gender. Overall, it seems that my results are qualitatively insensitive to alternative specifications of the model. 6 Discussion and conclusion The potential effect of school composition on student outcomes is a concern to policy-makers in many countries. The present study analyses the patterns of the school composition effect for students of differing abilities across three main subjects (reading, mathematics and science). Using the Danish subsample of the international PISA data (Programme for International Student Assessment), I attempt to reduce bias which is due to endogenous school choice by controlling for a comprehensive set of covariates. In addition, drawing on family background data from administrative registers for all same-aged schoolmates of 47 Including them jointly shows that they are highly significant as a group. However, the attempt to interpret any individual coefficient is confounded by its interaction with the other school characteristics. 48 I also experimented with including square terms for school heterogeneity, but the coefficients for the square terms were never significant. 49 Only in three out of 15 estimations was the square term was significant.
the PISA participants allows more accurate measurement of school-SES than possible with PISA data alone. The results shed light on the questions raised in the Introduction. (i) For the average student, the effects of schoolmates’ average socioeconomic background on reading, mathematics and science scores are positive and quite sizeable, although the heterogeneity of the student population at the school is imprecisely estimated. These results suggest important average effects of the socioeconomic school composition on test scores, although the results cannot provide conclusive evidence as to whether students would be (detrimentally) affected by a more diverse student body, which would be one consequence of policies that promote more mixing. (ii) The largest reading test score gains are for the students in the lower quantiles, while mathematics results suggest that high- and low-ability students benefit about equally from attending schools with a better student intake. Science results are at best marginally significant. Thus, for reading, school composition effects are stronger at the lower end of the conditional test score distribution, confirming speculation in the literature that low achievers are dependent learners, while high achievers are more independent. However, this differential effect is not present for mathematics and science. Additional results reveal that part of the school composition effect seems to work through a change in the learning environment (school and teaching climate, and teacher behaviour). All in all, these results suggest that a more equal mix of students across schools could boost average test score results and enhance equity of achievement for reading. For mathematics, average outcomes would not be significantly affected by more mixing of socio-economic groups because although there are test score gains at the lower end of the distribution, this is at the cost of a detrimental effect on the best students. Nevertheless, more mixing would produce more equity in mathematics, which might be a motive for reducing socio-economic segregation in schools. Science results are probably not affected. Changing the student mix across schools is only one of several policies designed to enhance student outcomes, and it is therefore relevant to compare the size of the estimated school-SES effect to effect-sizes of other interventions, such as teacher quality and class size. In their study on the effects of teacher quality, Rivkin et al. (2005) find that an increase of one standard deviation in average teacher quality raises average student achievement by about 0.10 SD in reading and mathematics test scores, which is somewhat larger than the estimated effects in this present study (a 0.05–0.08 SD increase in test scores for a one SD increase in school-SES). Re-examining results from the STAR experiment, where students were randomly assigned to a small (13–17 students) or a regular (22–25 students) class; a reduction amounting to about 2 SD in my data,50 Krueger (1999) found test scores for students in small classes rose by 50 As there are only two different class sizes in their sample, a standard deviation cannot be calculated to compare the results from the two studies. However, a class size difference of eight or nine students (between their small and regular-size classes) in my dataset amounts to about 2 SD.
about 0.22 SD. This comparison with other policies to enhance test scores suggests that changing the student mix in schools could be a similarly effective means of promoting student achievement as other measures. However, one important qualification should be borne in mind while considering the estimated school composition effects from the present study. As discussed above, there is concern about the remaining selection bias in the estimates. Therefore, to the extent that the estimated school composition effects reflect remaining unobserved characteristics (at the family, school, or neighbourhood level) or economic conditions affecting both the school composition and the dependent variable, it is not certain that policy-makers can really influence attainment just by altering the student mix. Thus the results do not provide cast-iron proof of school composition effects. However, although the estimated effects might stem from the student composition at schools as well as from remaining factors, the existence of school composition effects cannot be ruled out.

Data appendix

Parental academic interest
The PISA index of parental academic interest was derived from students' reports on the frequency with which their parents engaged with them in the following activities: discussing political or social issues; discussing books, films or television programmes; and listening to classical music. In the final version of the international PISA dataset, this composite has been renamed cultural communication.

Social communication
The PISA index of social communication was derived from students' reports on the frequency with which their parents engaged with them in the following activities: discussing how well they are doing at school; eating the main meal with them around a table; and spending time simply talking with them.

Home cultural possessions
The PISA index of possessions related to "classical" culture in the family home was derived from students' reports on the availability of the following items in their home: classical literature, books of poetry and works of art.

Home educational resources
The PISA index of home educational resources was derived from students' reports on the availability and number of the following items in their home: a dictionary, a quiet place to study, a desk for study, textbooks and calculators.
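PISA derives the questionnaire indices described above with a formal scaling model rather than a simple average. Purely as a rough illustration of the idea of a composite, one might standardize the underlying items and average them; the item names below are invented placeholders.

import pandas as pd

# df is assumed to be a pandas DataFrame of student questionnaire responses.
items = ["has_dictionary", "quiet_study_place", "own_desk", "textbooks", "calculators"]
z_items = (df[items] - df[items].mean()) / df[items].std()
df["home_educ_resources"] = z_items.mean(axis=1)   # crude standardized-average composite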
Table A1 Summary statistics

Variables                                                   Obs     Mean      SD        Min       Max
School composition: main measure
Mean school composition (SES)                               4175    −0.040    0.439     −1.936    1.293
Variation school composition (SES)                          4175    1.124     0.248     0.455     3.307
School composition: alternative specifications
Mean parental education in school (years of schooling)      4175    12.371    0.946     9.667     15.446
Percentage students living with both natural parents        4175    0.339     0.113     0.063     1.000
Percentage immigrant students in school                     4175    0.064     0.104     0.000     1.000
Mean parental income in school (yearly income; '000 DKK)    4175    327.962   73.226    109.644   656.056
Student background
Female student                                              4175    0.494     0.500     0.000     1.000
Native student                                              4156    0.930     0.250     0.000     1.000
Attends grade 8 (modal grade for 15-year-olds = grade 9)    4087    0.060     0.240     0.000     1.000
Lives with both natural parents                             4132    0.710     0.454     0.000     1.000
Highest parental education                                  4028    12.940    2.390     0.000     15.000
(Highest) parental occupation (PISA index)                  3916    49.458    16.338    16.000    90.000
Parental wealth (PISA index)                                4164    0.610     0.600     0.000     3.380
Parental income (yearly income; '000 DKK)                   4059    484.190   252.600   0.000     3594.940
Home educational resources (PISA index)                     4163    −0.219    0.933     −4.150    0.760
Parental academic interest (PISA index)                     4114    0.108     0.978     −2.200    2.270
Cultural possessions (PISA index)                           4114    −0.124    0.966     −1.650    1.150
School characteristics
Proportion of specialized Danish teachers                   4147    0.529     0.299     0.000     1.000
Proportion of specialized Math teachers                     4154    0.506     0.289     0.000     1.000
Proportion of specialized Science teachers                  4154    0.693     0.362     0.000     1.000
Student/teacher ratio in school                             4157    10.320    4.410     0.220     17.730
School size (enrolment)                                     3861    405.972   216.222   3.000     1195.000
School uses standardised tests for assessments              3996    0.908     0.289     0.000     1.000
School autonomy (PISA index)                                4174    0.128     0.625     −1.110    1.720
School and teaching climate
Teacher support (PISA index)                                4144    0.170     0.840     −3.030    1.950
Teacher behavior (PISA index)                               4174    −0.810    0.880     −2.410    1.220
Teacher morale (PISA index)                                 4174    0.023     0.830     −3.400    1.780
Student–teacher relationship (PISA index)                   4141    0.300     1.000     −2.900    2.830
Sample size
No. of students                                             4175
No. of schools                                              207

To capture information on wealth, home educational resources, parental academic interest, social communication, and cultural possessions, OECD has created composite variables summarizing data from several questions in the student questionnaire
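The two school composition measures at the top of Table A1 come from a first-stage principal component analysis of four register-based family background factors, aggregated to the school level. A rough sketch of that construction, with invented column names and using scikit-learn, could look like:

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# reg is assumed to hold one row per student from the administrative registers,
# with hypothetical columns for the four factors and a school identifier.
factors = ["parent_educ", "both_parents", "ethnic_danish", "parent_income"]
z = StandardScaler().fit_transform(reg[factors])
reg["ses"] = PCA(n_components=1).fit_transform(z)[:, 0]
# The sign of the first principal component is arbitrary; flip it if necessary
# so that higher values correspond to higher socioeconomic status.

school_comp = (reg.groupby("school_id")["ses"]
                  .agg(["mean", "std"])
                  .rename(columns={"mean": "mean_school_ses", "std": "sd_school_ses"}))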
Teacher behaviours
The PISA index of teacher behaviours (or 'principals' perceptions of teacher-related factors affecting school climate') was derived from principals' reports on the extent to which the learning of 15-year-olds was hindered by: low expectations of teachers; poor student-teacher relations; teachers not meeting individual students' needs; teacher absenteeism; staff resisting change; teachers being too strict with students; and students not being encouraged to achieve their full potential. Lower index values indicate a poorer disciplinary climate.

Teacher morale
The PISA index of teacher morale (principals' perception of teachers' morale and commitment) was derived from the extent to which school principals agreed with the following statements: the morale of the teachers in this school is high; teachers work with enthusiasm; teachers take pride in this school; and teachers value academic achievement.

Teacher support
The PISA index of teacher support was derived from students' reports on the frequency with which: the teacher shows an interest in every student's learning; the teacher gives students an opportunity to express opinions; the teacher helps students with their work; the teacher continues teaching until the students understand; the teacher does a lot to help students; and the teacher helps students with their learning.

Teacher–student relations
The PISA index of teacher–student relations was derived from students' reports on their level of agreement with the following statements: students get along well with most teachers; most teachers are interested in students' well-being; most of my teachers really listen to what I have to say; if I need extra help, I will receive it from my teachers; and most of my teachers treat me fairly.

Source: OECD (2001)
Fig. A1 The empirical distribution of the two school composition variables (mean and standard deviation of peer background at school). [Two histograms; horizontal axes run from −2 to 2.1 (left panel: Mean, right panel: Standard deviation); vertical axis: Fraction, 0 to 0.26.]
Table A2 OLS results for reading, math and science scores

Columns: Reading scores (Models 1–3) | Math scores (Models 1–3) | Science scores (Models 1–3). For each model the coefficient is followed by its standard error. School-level controls enter only Models 2 and 3; learning-environment controls enter only Model 3.

Mean school composition (SES)    18.91 5.57  17.36 5.20  12.21 4.92 | 21.89 5.67  20.01 5.57  16.95 5.52 | 19.70 5.99  18.19 5.90  15.95 5.95
Variation school composition (SES)    7.80 7.34  4.01 7.01  5.24 6.58 | −3.46 9.24  −7.44 7.67  −6.06 7.76 | 4.64 9.14  −0.32 9.02  1.63 9.09
Female student    18.91 2.73  19.19 2.75  17.59 2.79 | −16.74 3.72  −16.61 3.59  −18.20 3.55 | −16.81 3.92  −17.33 3.90  −18.65 3.89
Native student    39.25 6.90  39.08 6.91  39.65 6.70 | 29.68 7.93  29.84 8.09  28.37 8.10 | 43.99 10.08  43.19 10.08  44.17 10.14
Attends grade 8 (modal grade = grade 9)    −48.08 6.49  −49.28 6.19  −50.25 6.03 | −41.16 6.81  −41.61 6.36  −43.95 6.48 | −34.33 8.22  −34.96 7.89  −33.06 7.81
Lives with both natural parents    10.51 3.43  8.98 3.33  7.64 3.35 | 11.24 4.14  10.49 4.12  9.63 4.24 | 7.58 4.59  5.99 4.49  5.37 4.53
Highest parental education    6.28 0.64  6.20 0.67  6.17 0.63 | 3.56 0.88  3.50 0.93  3.52 0.92 | 6.34 0.95  6.46 0.95  6.22 0.92
Highest parental occupation    0.61 0.11  0.59 0.11  0.54 0.11 | 0.62 0.13  0.62 0.13  0.57 0.13 | 0.56 0.26  0.53 0.16  0.52 0.16
Parental wealth (index)    −8.57 2.06  −8.70 2.10  −8.52 2.09 | −1.74 2.59  −2.70 2.61  −2.11 2.58 | −3.26 3.16  −3.43 3.23  −3.28 3.13
Parental income (100,000 DKK/year)    0.01 0.01  0.01 0.01  0.01 0.01 | 0.00 0.01  0.00 0.01  0.01 0.01 | 0.01 0.01  0.01 0.01  0.01 0.01
Home educational resources    6.08 1.70  5.88 1.59  4.86 1.57 | 6.01 1.89  5.87 1.89  5.36 1.89 | 5.45 2.65  5.04 2.53  4.23 2.51
Parental academic interest    17.48 1.74  16.97 1.70  16.11 1.49 | 13.62 2.25  13.53 2.27  11.41 2.10 | 17.08 2.33  16.25 2.29  15.58 2.32
Cultural possessions    4.35 1.84  5.36 1.83  5.25 1.79 | 2.41 2.10  2.92 2.10  2.76 2.09 | 3.33 2.54  4.71 2.51  4.49 2.52
Proportion of Danish teachers (Reading, Models 2–3)    6.87 6.48  7.16 6.03
Proportion of math teachers (Math, Models 2–3)    5.61 9.63  5.52 9.27
Proportion of science teachers (Science, Models 2–3)    13.99 8.65  10.86 8.18
Student/teacher ratio in school (Models 2–3)    0.47 0.86  0.83 0.87 | −0.85 0.98  −0.84 0.94 | 0.33 0.97  0.23 0.96
School size (Models 2–3)    0.10 0.04  0.11 0.04 | 0.10 0.04  0.11 0.04 | 0.10 0.04  0.11 0.04
School size squared (Models 2–3)    −0.01 0.00  0.00 0.00 | −0.00 0.00  0.00 0.00 | −0.00 0.00  0.00 0.00
School uses standardised tests (Models 2–3)    18.71 8.60  16.30 7.62 | 15.18 9.00  13.43 8.84 | 12.12 10.14  10.15 9.81
School autonomy (Models 2–3)    −2.14 3.37  −3.13 3.21 | 4.76 3.39  4.08 3.46 | −3.21 4.08  −4.92 3.91
Teacher support (Model 3)    1.84 2.26 | −0.35 2.51 | 2.65 3.35
Teacher behavior (Model 3)    −1.59 2.09 | 0.98 2.68 | 1.65 2.84
Teacher morale (Model 3)    2.51 2.27 | 2.90 2.42 | 3.01 2.74
Student–teacher relations (Model 3)    9.21 1.78 | 7.74 2.19 | 8.24 2.74
Constant    354.11 16.08  304.64 27.27  282.10 19.72 | 443.25 19.19  409.60 27.76  395.01 22.40 | 358.89 21.01  306.94 33.94  298.74 24.17
Adjusted R-sq.    0.28  0.29  0.31 | 0.23  0.24  0.25 | 0.21  0.22  0.23
No. observations    4175 | 2350 | 2314

Dummy variables for missing values are included in the regressions
Table A3 Quantile regression results for reading, math and science scores

Reading scores (4,175 observations). Covariates are listed in the following order: Female student; Native student; Attends grade 8 (modal grade = grade 9); Lives with both natural parents; Highest parental education; Highest parental occupation; Parental wealth (index); Parental income (100,000 DKK/year); Home educational resources; Parental academic interest; Cultural possessions; Proportion of Danish teachers; Student/teacher ratio in school; School size; School size squared; School uses standardised tests; School autonomy; Teacher support; Teacher behavior; Teacher morale; Student–teacher relations; Constant.

Mean school composition (SES), coefficient (se) at the 0.10/0.25/0.50/0.75/0.90 percentiles: 13.88 (8.52), 15.83 (6.38), 9.47 (4.39), 9.25 (5.58), 8.88 (6.04)
Variation school composition (SES): −0.04 (10.19), 4.51 (7.55), 2.91 (7.65), 6.81 (8.39), 4.91 (9.16)
Pseudo R-sq.: 0.20, 0.18, 0.16, 0.13, 0.14

0.10 percentile, coefficients: 28.91 52.19 −71.02 7.07 5.87 0.70 −12.20 0.01 7.85 15.23 6.15 6.33 1.28 0.17 0.00 8.97 −2.89 9.43 −1.99 2.88 6.70 158.21
0.10 percentile, standard errors: 5.31 11.47 12.20 5.70 1.27 0.19 4.13 0.01 3.07 2.98 2.95 10.50 1.34 0.05 0.00 12.06 5.71 3.25 3.24 3.59 2.89 33.73
0.25 percentile, coefficients: 19.84 48.69 −58.85 7.01 6.81 0.58 −9.53 0.01 5.01 14.61 5.56 6.78 −0.21 0.15 0.00 7.77 −1.26 4.14 −4.16 0.98 7.13 227.70
0.25 percentile, standard errors: 4.00 8.40 9.36 4.36 0.85 0.14 2.90 0.01 2.25 2.31 2.24 7.61 1.02 0.04 0.00 6.83 3.65 2.89 2.62 2.64 2.31 23.93
0.50 percentile, coefficients: 17.85 38.75 −48.53 7.93 5.87 0.63 −5.92 0.01 1.05 16.16 4.95 8.02 0.88 0.05 0.00 5.95 −2.79 2.35 −0.91 2.45 8.82 309.58
0.50 percentile, standard errors: 2.96 7.56 8.04 3.69 0.80 0.12 2.14 0.01 1.99 1.82 1.88 5.91 0.73 0.03 0.00 4.76 2.96 1.96 2.11 1.89 1.78 18.65
0.75 percentile, coefficients: 12.24 38.69 −39.51 0.22 5.86 0.58 −6.53 0.02 5.32 16.12 3.18 5.09 0.14 0.08 0.00 13.18 −3.00 −0.60 0.63 3.20 12.41 359.70
0.75 percentile, standard errors: 3.44 7.89 7.85 4.06 0.78 0.13 2.75 0.01 2.04 1.93 1.91 7.09 0.99 0.03 0.00 5.45 3.43 2.24 2.34 2.86 2.25 21.63
0.90 percentile, coefficients: 14.76 27.75 −32.43 6.15 5.57 0.73 −5.32 0.01 7.28 15.40 6.18 8.40 0.67 0.07 0.00 14.78 −5.80 −4.41 −1.97 0.70 9.51 403.87
0.90 percentile, standard errors: 4.07 9.82 9.91 4.31 0.99 0.16 3.10 0.01 2.54 2.29 2.31 6.95 1.30 0.04 0.00 6.77 3.75 2.71 2.55 3.01 2.48 23.57

Dummy variables for missing values are included in the regressions
Table A3 (continued)

Math scores (2,350 observations). Covariates in the same order as above, with Proportion of math teachers replacing Proportion of Danish teachers.

Mean school composition (SES), coefficient (se) at the 0.10/0.25/0.50/0.75/0.90 percentiles: 13.14 (10.08), 15.65 (7.64), 12.78 (6.90), 17.00 (6.29), 23.92 (7.18)
Variation school composition (SES): −14.43 (13.54), 1.04 (13.68), −2.02 (8.18), −12.98 (10.16), −15.29 (13.96)
Pseudo R-sq.: 0.15, 0.13, 0.13, 0.12, 0.12

0.10 percentile, coefficients: −9.47 43.37 −35.97 18.20 5.12 0.76 −1.95 0.00 5.49 10.96 3.17 16.29 0.01 0.14 0.00 18.70 6.48 7.26 4.56 4.01 4.33 299.72
0.10 percentile, standard errors: 6.21 17.28 17.78 7.42 1.70 0.20 4.83 0.01 3.73 3.74 3.84 15.54 1.55 0.06 0.00 12.00 7.05 4.43 4.50 4.01 3.63 40.46
0.25 percentile, coefficients: −13.48 26.98 −39.15 20.77 4.44 0.52 −1.05 −0.02 6.65 8.87 3.81 7.80 −0.20 0.11 0.00 11.50 8.55 0.91 −1.23 3.32 4.87 321.07
0.25 percentile, standard errors: 4.50 11.94 9.63 5.62 1.18 0.17 3.37 0.01 2.96 2.73 2.97 11.33 1.31 0.05 0.00 8.91 4.35 3.30 2.67 2.77 2.53 30.41
0.50 percentile, coefficients: −18.07 32.16 −52.48 0.23 3.48 0.64 −0.63 0.01 4.52 9.01 1.47 2.50 −1.16 0.09 0.00 9.59 2.21 −1.67 −2.02 2.07 9.99 405.07
0.50 percentile, standard errors: 4.27 8.05 7.99 5.32 1.05 0.16 3.09 0.01 2.67 2.53 2.61 9.97 1.09 0.05 0.00 7.09 4.15 3.26 2.82 2.96 3.02 24.44
0.75 percentile, coefficients: −21.21 30.02 −63.57 1.30 3.16 0.38 −6.84 0.01 7.40 9.47 3.64 11.82 −1.63 0.09 0.00 10.29 0.27 −1.25 −0.49 1.62 8.69 484.92
0.75 percentile, standard errors: 4.47 11.01 11.43 5.56 1.12 0.18 2.97 0.01 2.69 2.71 2.66 10.35 1.14 0.05 0.00 7.76 4.05 2.96 2.80 3.20 2.64 24.93
0.90 percentile, coefficients: −25.00 27.47 −53.15 −6.38 3.95 0.31 −2.65 0.01 3.87 13.00 3.37 4.47 −2.53 0.12 0.00 7.29 1.12 0.88 1.08 13.98 7.89 542.25
0.90 percentile, standard errors: 5.25 11.33 12.76 5.65 1.11 0.20 3.72 0.01 3.25 3.06 2.95 12.69 1.34 0.06 0.00 8.06 4.76 3.61 3.38 4.25 3.31 31.46

Dummy variables for missing values are included in the regressions
Table A3 (continued)

Science scores (2,314 observations). Covariates in the same order as above, with Proportion of science teachers replacing Proportion of Danish teachers.

Mean school composition (SES), coefficient (se) at the 0.10/0.25/0.50/0.75/0.90 percentiles: 18.12 (11.42), 13.58 (7.86), 11.35 (7.89), 11.09 (8.05), 10.17 (9.14)
Variation school composition (SES): 7.21 (14.23), 3.59 (9.96), −4.88 (11.79), 8.73 (12.69), 3.81 (13.77)
Pseudo R-sq.: 0.12, 0.13, 0.13, 0.13, 0.13

0.10 percentile, coefficients: −21.45 43.13 −45.96 6.19 6.54 0.58 −1.85 −0.01 8.34 11.54 4.42 −2.28 2.27 0.13 0.00 4.61 7.37 7.80 6.10 2.16 −2.38 177.91
0.10 percentile, standard errors: 7.89 17.09 17.42 8.08 1.78 0.27 5.72 0.02 4.93 4.17 4.34 14.51 2.36 0.09 0.00 14.53 7.66 5.04 5.16 4.76 4.04 43.17
0.25 percentile, coefficients: −13.00 45.38 −30.36 9.44 5.70 0.53 −0.71 0.00 2.24 15.46 5.83 2.05 0.36 0.08 0.00 5.76 −4.39 6.40 −1.26 3.22 2.06 258.85
0.25 percentile, standard errors: 4.89 13.04 15.79 6.10 1.16 0.20 3.80 0.01 2.89 2.93 2.97 9.88 1.18 0.05 0.00 9.32 4.36 3.00 3.54 3.23 2.65 32.00
0.50 percentile, coefficients: −16.96 49.60 −29.73 10.13 6.81 0.52 −5.60 0.01 6.58 15.39 4.36 9.97 −0.25 0.12 0.00 9.13 −4.10 3.62 −3.01 0.75 9.28 290.87
0.50 percentile, standard errors: 5.13 11.45 11.13 5.90 1.32 0.19 3.86 0.01 3.18 2.88 3.13 8.99 1.25 0.04 0.00 7.45 4.80 3.56 3.35 3.07 3.07 29.30
0.75 percentile, coefficients: −20.42 55.09 −25.51 −0.59 6.10 0.46 −7.29 0.02 11.90 17.33 3.33 21.23 −0.56 0.07 0.00 11.41 −6.33 1.48 1.46 1.59 12.03 359.48
0.75 percentile, standard errors: 5.52 13.18 10.53 6.58 1.21 0.20 3.69 0.01 3.44 3.20 3.30 9.09 1.31 0.05 0.00 10.27 4.50 4.40 3.29 3.29 3.29 29.51
0.90 percentile, coefficients: −25.28 19.89 −25.90 1.71 7.29 0.74 −4.56 0.01 4.74 20.25 −0.15 35.66 −1.78 0.07 0.00 6.14 −7.15 −0.43 2.24 3.56 9.81 424.48
0.90 percentile, standard errors: 5.59 16.83 14.29 6.14 1.27 0.24 4.61 0.02 3.72 3.27 3.33 11.26 1.58 0.06 0.00 9.52 5.66 3.59 3.55 3.76 3.41 37.33

Dummy variables for missing values are included in the regressions
Table A4 Confidence bounds for quantile regression results with PCA included in the bootstrap

                                          0.10     0.25     0.50     0.75     0.90
READ  Mean school composition (SES)       17.33    21.77    13.15    15.31    9.88
      Lower bound 95%                     0.92     10.00    4.45     4.22     −2.22
      Upper bound 95%                     35.71    34.89    22.87    27.71    23.17
MATH  Mean school composition (SES)       18.83    18.74    17.04    17.35    27.97
      Lower bound 95%                     −1.85    5.33     4.04     4.41     13.37
      Upper bound 95%                     41.78    33.74    31.56    31.83    44.26
SCIE  Mean school composition (SES)       18.83    16.60    15.12    11.05    8.9
      Lower bound 95%                     −4.43    2.29     1.01     −5.75    −6.51
      Upper bound 95%                     43.42    32.61    31.04    29.19    25.38
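The bounds in Table A4 come from a bootstrap in which the first-stage principal component analysis is re-run inside every replication, so that the estimation error of the school-SES composite is reflected in the confidence intervals. A minimal sketch of such a two-stage bootstrap follows; resampling whole schools and the helper add_pca_school_ses are assumptions for illustration, not details taken from the paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
schools = df["school_id"].unique()               # df and school_id are hypothetical
boot_coefs = []

for b in range(1000):
    # Resample whole schools with replacement, then redo BOTH stages
    # within the bootstrap sample.
    draw = rng.choice(schools, size=len(schools), replace=True)
    sample = pd.concat([df[df["school_id"] == s] for s in draw], ignore_index=True)
    sample = add_pca_school_ses(sample)          # hypothetical helper: first-stage PCA
    fit = smf.quantreg(
        "read_score ~ mean_school_ses + var_school_ses + female + parent_educ",
        data=sample,
    ).fit(q=0.25)
    boot_coefs.append(fit.params["mean_school_ses"])

lower, upper = np.percentile(boot_coefs, [2.5, 97.5])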
References Ammermüller A, Pischke J-S (2006) Peer effects in European primary schools: evidence from PIRLS. IZA discussion paper 2077 Andersen AM, Egelund N, Jensen TP, Krone M, Lindenskov L, Mejding J (2001) Forventninger og frdigheder - danske unge i en international sammenligning, (in Danish with an English summary). akf, DPU, SFI. Copenhagen, Denmark Bedard K (2003) School quality and the distribution of male earnings in Canada. Econ Educ Rev 22:395–407 Betts J, Morell D (1999) The determinants of undergraduate grade point average—the relative importance of family background, high school resources, and Peer Group Effects. J Hum Resour 34(2):268–293 Betts J, Zau A (2004) Peer Groups and academic achievement: panel evidence from administrative data. Public Policy Institute of California. February 2004 Blau PM (1960) Structural effects. Am Sociol Rev 25:178–193 Boozer MA, Cacciola SE (2001) Inside the “Black Box” of project STAR: estimation of peer effects using experimental data. Discussion Paper No. 832, Economic Growth Center, Yale University Buchinsky M (1994) Changes in the U.S. wage structure 1963–1987: application of quantile regression. Econometrica 62(2):405–458 Buchinsky M (1998) Recent advances in quantile regression models: a practical guideline for empirical research. J Hum Resour 33(1):88–126 Dreeben R, Barr R (1988) Classroom composition and the design of instruction. Sociol Educ 61:129–142 Driessen G (2002) School composition and achievement in primary education: a large-scale multilevel approach. Stud Educ Eval 28:347–68 Eide E, Showalter M (1998) The effect of school quality on student performance: a quantile regression approach. Econ Lett 58:345–350 Falk A, Ichino A (2006) Clean evidence on peer effects. J Lab Econ 24(1):39–58 Feinstein L, Symons J (1999) Attainment in secondary school. Oxford Economic Papers 51, 300-321 Fitzenberger B, Koenker R, Machado JAF (eds.) (2001) Empir Econ 26(1) Glavind N (2004) Polarisering på boligmarkedet. (Polarization in the housing market.) Arbejderbevgelsens Erhvervsråd, Copenhagen Gould ED, Lavy V, Paserman MD (2004) Does immigration affect the long-term educational outcomes of natives? Quasi-experimental evidence. NBER Working Papers 10844, National Bureau of Economic Research Goux D, Maurin E (2006) Close neighbours matter: neighbourhood effects on early performance at school. IZA Discussion Paper 2095
Green DA, Riddell C (2003) Literacy and earnings: an investigation of the interaction of cognitive and unobserved skills in earnings generation. Labour Econ 10:165–184 Hanushek EA, Kain, JF, Markman, Jacob M, Rivkin SG (2003) Does peer ability affect student achievement? J Appl Econ 18(5):527–544 Hanushek EA, Kain JF, Rivkin SG (2004) Why public schools lose teachers. J Hum Resour 39(2):326–356 Hoxby C (2000) Peer effects in the classroom: learning from gender and race variation. NBER WP 7867 Hummelgaard H, Husted L, Holm A, Baadsgaard M, Olrik B (1995) Etniske minoriteter, integration og mobilitet (Ethnic minorities, integration and mobility.) In Danish. akf Forlaget Hummelgaard H, Husted L (2001) Social og etnisk bestemt bostning - årsager og konsekvenser (Socially and ethnically determined housing.) In Danish. akf Forlaget Jencks C, Mayer SE (1990) The social consequences of growing up in a poor neighborhood. In: Lynn LE, McGeary MGH (eds) Inner-city poverty in the United States. National Academy Press, Washington Jolliffe IT (1986) Principal component analysis. Springer, Berlin Levin J (2001) For whom the reductions count: a quantile regression analysis of class size and peer effects on scholastic achievement. Empir Econ 26:221–246 Mahard RE, Crain RL (1983) Research on minority achievement in desegregated schools. In: Rossell CH, Howley WD (eds) The consequences of school segregation. Temple University Press, Philadelphia, pp 103–127 Manski CF (1993) Identification of endogenous social effects: the reflection problem. Rev Econ Stud 60(3):531—542 Manski CF (2000) Economic analysis of social interactions. J Econ Perspect 14(3):115—136 McEwan PJ (2003) Peer effects on student achievement: evidence from Chile. Econ Educ Rev 22:131–141 OECD (2001) Knowledge and skills for life—first results from PISA 2000. OECD, Paris OECD (2002a) PISA 2000 technical report. OECD, Paris OECD (2002b) Manual for the PISA 2000 database. OECD, Paris OECD (2003) Literacy skills for the world of tomorrow. Further Results from PISA 2000. OECD, Paris Rivkin SG, Hanushek EA, Kain JF (2005) Teachers, schools, and academic achievement. Econometrica 73:417–458 Robertson D, Symons J (2003) Do peer groups matter? Peer groups versus schooling effects on academic achievement. Econ 70:31–53 Sacerdote B (2001) Peer effects with random assignment: results for Dartmouth Roommates. Q J Econ 116(2):681–704 Schneeweis N, Winter-Ebmer R (2007) Peer effects in Austrian Schools. Empir Econ (forthcoming) Shavit Y, Williams RA (1985) Ability grouping and contextual determinants of educational expectations in Israel. Am Sociol Rev 50:62–73 Söderström M, Uusitalo R (2005) School choice and segregation: evidence from an admission reform. IFAU Working Paper Nr. 7/2005. Uppsala, Sweden Thrupp M, Lauder H, Robinson T (2002) School composition and peer effects. Int J Educ Res 37:483–504 Webb NM (1991) Task related verbal interaction and mathematics learning in small groups. J Res Math Educ 22:366–389 Willms JD (1986) Social class segregation and its relationship to pupils’ examination results in Scotland. Am Sociol Rev 51(2):224–241 Wilson A (1959) Residential segregation of social classes and aspirations of high school boys. Am Sociol Rev 24:836–845 Woessmann J (2004) How equal are educational opportunities? family background and student achievement in Europe and the United States. IZA Working Paper No. 1284. Bonn Zimmer RW, Toma EF (2000) Peer effects in private and public schools across countries. 
J Policy Anal Manage 19(1):75–92 Zimmerman DJ (2003) Peer effects in academic outcomes: evidence from a natural experiment. Rev Econ Stat 85:9–23
What accounts for international differences in student performance? A re-examination using PISA data Thomas Fuchs · Ludger Wößmann
Revised: 17 July 2006 / Published online: 22 August 2006 © Springer-Verlag 2006
Abstract We use the PISA student-level achievement database to estimate international education production functions. Student characteristics, family backgrounds, home inputs, resources, teachers and institutions are all significantly associated with math, science and reading achievement. Our models account for more than 85% of the between-country performance variation, with roughly 25% accruing to institutional variation. Student performance is higher with external exams and budget formulation, but also with school autonomy in textbook choice, hiring teachers and within-school budget allocations. Autonomy is more positively associated with performance in systems that have external exit exams. Students perform better in privately operated schools, but private funding is not decisive. Keywords Education production function · PISA · International variation in student performance · Institutional effects in schooling JEL classification I28 · J24 · H52 · L33
T. Fuchs (B)
Ifo Institute for Economic Research, Poschingerstr. 5, 81679 Munich, Germany
e-mail: [email protected]
URL: www.cesifo.de/link/fuchs_t.htm

L. Wößmann
Ifo Institute for Economic Research, University of Munich and CESifo, Poschingerstr. 5, 81679 Munich, Germany
e-mail: [email protected]
URL: www.cesifo.de/link/woessmann_l.htm
1 Introduction The results of the Programme for International Student Assessment (PISA), conducted in 2000 by the Organisation for Economic Co-operation and Development (OECD), triggered a vigorous public debate on the quality of education systems in most participating countries. PISA made headlines on the front pages of tabloids and more serious newspapers alike. For example, The Times (December 6, 2001) in England titled, “Are we not such dunces after all?”, and Le Monde (December 5, 2001) in France titled, “France, the mediocre student of the OECD class”. In Germany, the PISA results hit the front pages of all leading newspapers for several weeks (e.g., “Abysmal marks for German students” in the Frankfurter Allgemeine Zeitung, December 4, 2001), putting education policy at the forefront of attention ever since. “PISA” is now a catch-phrase, known by every German, for the poor state of the German education system. While this coverage proves the immense public interest, the quality of much of the underlying analysis is less clear. Often, public assessments tend to simply repeat long-held believes, rather than being based on evidence produced by the PISA study. If based on PISA facts, they usually rest on bilateral comparisons between two countries, e.g., comparing a commentator’s home country to the top performer (Finland in the case of PISA reading literacy). And more often than not, they are bivariate, presenting the simple correlation between student performance and a single potential determinant, such as educational spending. Economic theory suggests that one important set of determinants of educational performance are the institutions of the education system, because they set the incentives for the actors in the education process. Among the institutions that have been theorized to impact on the quality of education are public versus private financing and provision (e.g., Epple and Romano 1998; Nechyba 2000), centralization of financing (e.g., Hoxby 1999, 2001; Nechyba 2003), external versus teacher-based standards and examinations (e.g., Costrell 1994; Betts 1998; Bishop and Wößmann 2004), centralization versus school autonomy in curricular, budgetary and personnel decisions (e.g., Bishop and Wößmann 2004) and performance-based incentive contracts (e.g., Hanushek et al. 1994). In many countries, the impact that such institutions may have on student performance tends to be ignored in most discussions of education policy, which often focus on the implicitly assumed positive link between schooling resources and learning outcomes. One reason for this neglect may be that the lack of institutional variation within most education systems makes an empirical identification of the impact of institutions impossible when using national datasets, as is standard practice in most empirical research on educational production (cf., Hanushek 2002 and the references therein). However, such institutional variation is given in cross-country data, and evidence based on previous international student achievement tests such as IAEP (Bishop 1997), TIMSS (Bishop 1997; Wößmann 2003a) and TIMSS-Repeat (Wößmann 2003b) supports the view that institutions play a key role in determining student outcomes. These international databases allow for multi-country multivariate analyses, which ensure that the impact of each
determinant is estimated for observationally similar schools by holding the effects of other determinants constant. In this paper, we use the PISA database to test the robustness of the findings of these previous studies of international education production functions.1 Combining the performance data with background information from student and school questionnaires, we estimate the association between student background, schooling resources and schooling institutions on the one hand and international variations in students’ educational performance on the other hand. In contrast to Bishop’s (1997, 2006) country-level analyses, we perform the analyses at the level of the individual student, which allows us to take advantage of within-country variation in addition to between-country variation, vastly increasing the degrees of freedom of the analyses, at least to the extent that there is independence of error terms within countries. Given its particular features, the rich PISA student-level database allows for a rigorous assessment of the determinants of international differences in student performance in general, and of the link between schooling institutions and student performance in particular. PISA offers the possibility to re-examine the validity of results of previous international studies in the context of a different subject (reading in addition to math and science), a different definition of required capabilities and of the target population, and to extend the examination by including more detailed family-background and institutional data. Among others, the PISA database distinguishes itself from previous international tests by providing data on parental occupation at the level of the individual student and on private versus public operation and funding at the level of the individual school. Our results show that the PISA evidence underscores the importance of institutional features for international differences in student performance. Most notably, there are important interaction effects between external exit exams and several measures of school autonomy, with the association between school autonomy and student performance becoming more positive in school systems with external exit exams. The remainder of the paper is structured as follows. Section 2 describes the database of the PISA international student performance study and compares its features to previous studies. Section 3 discusses the econometric model. Section 4 presents the empirical results. Section 5 reports a set of robustness
checks. Section 6 analyzes the explanatory power of the model and its different parts at the country level. Section 7 summarizes and concludes.

1 Some economic research based on PISA data exists, but it is mostly on a national scale. Fertig (2003a) uses the German PISA sample to analyze determinants of German students' achievement. Fertig (2003b) uses the US PISA sample to analyze class composition and peer group effects. Wolter and Coradi Vellacott (2003) use PISA data to study sibling rivalry in Belgium, Canada, Finland, France, Germany and Switzerland. To our knowledge, the only previous study using PISA data to estimate multivariate education production functions internationally is Fertig and Schmidt (2002), who restrict attention to reading performance and do not focus on estimating determinants of international performance variation but rather on estimating conditional national performance scores. Recently, Fuchs and Wößmann (2004b) analyze the association between computers and PISA performance in detail.
2 The PISA international student performance study

2.1 PISA and previous international student achievement tests

The international dataset used is the OECD Programme for International Student Assessment (PISA). In addition to testing the robustness of findings derived from previous international student achievement tests, the PISA-based analyses contribute several new aspects to the literature.

First, PISA tested a new subject, namely reading literacy, in addition to the math and science already tested in IAEP and TIMSS. This alternative measure of performance broadens the outcome of the education process considered in the analyses.

Second, particularly in reading, but also in the more traditional domains of math and science, "PISA aims to define each domain not merely in terms of mastery of the school curriculum, but in terms of important knowledge and skills needed in adult life" (OECD 2000, p. 8). That is, rather than being curriculum-based as the previous studies were, "PISA looked at young people's ability to use their knowledge and skills in order to meet real-life challenges" (OECD 2001, p. 16). For example, reading literacy is defined in terms of "the capacity to understand, use and reflect on written texts, in order to achieve one's goals, to develop one's knowledge and potential, and to participate in society" (OECD 2000, p. 10).2 There is a similar real-life focus in the other two subjects. On the one hand, this focus should constitute the most important outcome of the education process, but on the other hand it bears the caveat that schools are assessed not on the basis of what their school system requires them to teach, but rather on what students may particularly need for coping with everyday life.

Third, rather than targeting students in specific grades as in previous studies, PISA's target population is the 15-year-old students in each country, regardless of the grade they currently attend. This target population not only comprises young people near the end of compulsory schooling, but also captures students of the very same age in each country independent of the structure of national school systems. By contrast, the somewhat artificial grade-related focus of other studies may be distorted by differing entry ages and grade-repetition rules.

Fourth, the PISA data provide more detailed information than previous studies on some institutional features of the school systems. For example, PISA provides data on whether schools are publicly or privately operated, on which share of their funding stems from public or private sources and on whether schools can fire their teachers. These background data provide improved internationally comparable measures of schooling institutions.

2 See OECD (2000) for further details on the PISA literacy measures, as well as for sample questions.
Fifth, the PISA data also provide more detailed information on students' family background. For instance, there is information on parental occupation and the availability of computers at home. This should contribute to a more robust assessment of potential determinants of student performance.

Finally, reading literacy is likely to depend more strongly on family-background variables than performance in math and science. Hence, controlling for a rich set of family-background variables should establish a more robust test of the institutions-performance link when reading performance is the dependent variable.

Taken together, the PISA international dataset allows for a re-examination of results based on previous international tests using an additional subject, real-life rather than curriculum-based capabilities, an age-based target population and richer data particularly on family background and institutional features of the school system.

2.2 The PISA database

The PISA study was conducted in 2000 in 32 developed and emerging countries, 28 of which are OECD countries, in order to obtain an internationally comparable database on the educational achievement of 15-year-old students in reading, math and science. The study was organized and conducted by the OECD, ensuring as much comparability among participants as possible and a consistent and coherent study design.3 Table 1 reports the countries participating in the PISA 2000 study.4

As described above, PISA's target population was the 15-year-old students in each country. More specifically, PISA sampled students aged from 15 years and 3 months to 16 years and 2 months at the beginning of the assessment period. The students had to be enrolled in an educational institution, regardless of the grade level or type of institution. The average age of OECD-country students participating in PISA was 15 years and 8 months, varying by a maximum of only 2 months among the participating countries.

The PISA sampling procedure ensured that a representative sample of the target population was tested in each country. Most PISA countries employed a two-stage sampling technique. The first stage drew a (usually stratified) random sample of schools in which 15-year-old students were enrolled, yielding a minimum sample of 150 schools per country. The second stage randomly sampled 35 of the 15-year-old students in each of these schools, with each 15-year-old student in a school having equal selection probability. This sampling procedure typically led to a sample of between 4,500 and 10,000 tested students in each country.
3 For detailed information on the PISA study and its database, see OECD (2000, 2001, 2002), Adams and Wu (2002) and the PISA homepage at http://www.pisa.oecd.org.
4 Liechtenstein was not included in our analysis due to lack of internationally comparable country-level data, e.g. on educational expenditure per student. Note that there were only 326 15-year-old students in Liechtenstein in total, 314 of whom participated in PISA.
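For illustration, the two-stage sampling design described above could be mimicked along the following lines; this is a minimal sketch in which the DataFrame students and its column names are hypothetical, and both the stratification of the school sample and the resulting sampling weights are ignored.

import pandas as pd

def two_stage_sample(students: pd.DataFrame, n_schools: int = 150,
                     n_per_school: int = 35, seed: int = 0) -> pd.DataFrame:
    # Stage 1: draw a simple random sample of schools enrolling 15-year-olds.
    sampled_schools = (students["school_id"].drop_duplicates()
                       .sample(n=n_schools, random_state=seed))
    pool = students[students["school_id"].isin(sampled_schools)]
    # Stage 2: draw up to 35 students within each sampled school,
    # each student having equal selection probability within the school.
    return (pool.groupby("school_id", group_keys=False)
                .apply(lambda s: s.sample(n=min(n_per_school, len(s)),
                                          random_state=seed)))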
Table 1 Descriptive country statistics of selected variables

[Table 1 reports, for each of the 31 participating countries (Australia, Austria, Belgium, Brazil, Canada, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Korea, Latvia, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Russian Federation, Spain, Sweden, Switzerland, United Kingdom, United States) and for the total average: mean test scores in reading, math and science; the international socio-economic index (ISEI); the share of students facing external exit exams (in math); school autonomy in determining course content, establishing teachers' starting salaries and choosing textbooks; the share of publicly managed schools; and the share of government funding.]

Notes: Country means, based on non-imputed data for each variable, weighted by sampling probabilities. ISEI international socio-economic index. The institutional variables are reported as shares within each country (in percent). NA not available. a In math.
Table 2 Variance decomposition

                               Math     Science   Reading
Variance between students      100.0    100.0     100.0
Variance within schools         56.2     63.0      59.7
Variance between schools        43.8     37.0      40.3
Variance between countries      16.1     10.7       9.6

Note: Share of total variance in test scores occurring between (resp. within) the respective group (in percent)
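For illustration, the decomposition reported in Table 2 could be computed along the following lines; this is a minimal sketch assuming a pandas DataFrame df with hypothetical columns 'score', 'school_id' and 'country' (one row per tested student), and it ignores the sampling weights and the equal weighting of countries used in the paper.

import pandas as pd

def variance_shares(df: pd.DataFrame, score: str = "score") -> pd.Series:
    total = df[score].var(ddof=0)                                  # variance between students
    school_mean = df.groupby("school_id")[score].transform("mean")
    country_mean = df.groupby("country")[score].transform("mean")
    within_schools = (df[score] - school_mean).var(ddof=0)         # pooled within-school variance
    between_schools = school_mean.var(ddof=0)                      # equals total minus within_schools
    between_countries = country_mean.var(ddof=0)
    return 100 * pd.Series({
        "between students": total,
        "within schools": within_schools,
        "between schools": between_schools,
        "between countries": between_countries,
    }) / total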
The performance tests were paper and pencil tests, lasting a total of two hours for each student. Test items included both multiple-choice items and questions requiring the students to construct their own responses. The PISA tests were constructed to test a range of relevant skills and competencies that reflected how well young adults are prepared to analyze, reason and communicate their ideas effectively. Each subject was tested using a broad sample of tasks with differing levels of difficulty to represent a coherent and comprehensive indicator of the continuum of students’ abilities. Using item response theory, PISA mapped performance in each subject on a scale with an international mean of 500 and a standard deviation of 100 test-score points across the OECD countries. The main focus of the PISA 2000 study was on reading, with two-thirds of the testing time devoted to this subject. In the other two subjects, smaller samples of students were tested. The correlation of student performance between the subjects is substantial, at 0.700 between reading and math (96,913 joint observations), 0.718 between reading and science (96,815) and 0.639 between math and science (39,079). To give an idea of the international structure of the data, Table 1 reports country means of test-score performance in the three subjects. Mean performance on the reading literacy test ranges from 403.4 test-score points in Brazil to 544.1 in Finland. Table 2 reports results of a decomposition of the total international variance of student-level test scores into components within schools, between schools and between countries. In each subject, more than half of the total variance occurs within individual schools. The variance component that occurs between countries ranges from 9.6% in reading to 16.1% in math. In addition to the performance tests, students as well as school principals answered respective background questionnaires, yielding rich background information on students’ personal characteristics and family backgrounds as well as on schools’ resource endowments and institutional settings. Combining the available data, we constructed a dataset containing 174,227 students in 31 countries tested in reading literacy. In math, the sample size is 96,855 students, and 96,758 students in science. In our estimations, we drop students in extremely outlying grades, namely grades 6 or lower and grades 12 or higher, which reduces the samples by between 342 and 609 students in the three subjects. The dataset combines student test scores in reading, math and science with students’ characteristics, family-background data and school-related variables
of resource use and institutional settings.5 For estimation purposes, a variety of qualitative variables were transformed into dummy variables. Table 1 contains country means of selected country characteristics. Table 3 reports international descriptive statistics for all the variables employed in this paper. It also includes information on the amount of original versus missing data for each variable.

To be able to use a complete dataset of all students with performance data and at least some background data, we imputed missing values using the method described in the Appendix. Given the large set of explanatory variables considered and given that each variable is missing for some students, dropping all student observations that have a missing value on at least one variable would mean a severe reduction in sample size. While the percentage of missing values of each individual variable ranges from 0.9 to 33.3% (cf. Table 3), the percentage of students with a missing value on at least one variable is 72.6% in reading. That is, the sample size in reading would be as small as 47,782 students from 20 countries (26,228 students from 19 countries in math and 24,049 students from 19 countries in science). Apart from the general reduction in sample size, dropping all students with a missing value on at least one variable would delete information available on other explanatory variables for these students and introduce bias if values are not missing at random. Thus, data imputation is the only viable way of performing the broad-based analyses. As described in Sect. 3.1 below, the estimations we employ ensure that the effects estimated for each variable are not driven by imputed values.

In addition to the rich PISA data at the student and school level, we also use country-level data on countries' GDP per capita in 2000 (measured in purchasing power parities (PPP), World Bank 2003), average educational expenditure per student in secondary education in 2000 (measured in PPP, OECD 2003)6 and the existence of curriculum-based external exit exams (in their majority kindly provided by John Bishop; cf. Bishop 2006).
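As a rough illustration of this missing-data accounting, the shares mentioned above could be computed along the following lines; the DataFrame df and the list explanatory_cols are hypothetical, and the figures in the text are additionally weighted by students' sampling probabilities, which is omitted here.

# df is assumed to be a pandas DataFrame with one row per student and
# explanatory_cols a list of all explanatory-variable column names.
share_missing_per_variable = df[explanatory_cols].isna().mean() * 100      # per variable, in percent
share_missing_any = df[explanatory_cols].isna().any(axis=1).mean() * 100   # at least one missing value
complete_cases = df.dropna(subset=explanatory_cols)                        # what would remain without imputation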
5 We do not use data on teaching methods, teaching climate or teacher motivation as explanatory variables, because we view these mainly as outcomes of the education system. First, such measures are endogenous to the institutional surrounding of the education system. This institutional surrounding sets the incentives to use specific methods and creates a specific climate, thereby constituting the deeper cause of such factors. Second, such measures may be as much the outcome of students' performance as their cause, so that they would constitute left-hand-side rather than right-hand-side variables.
6 For the three countries with missing data in OECD (2003), we predict the missing expenditure data by ordinary least squares, using comparable data from World Bank (2003) and data from both sources for the countries where both are available.
Table 3 International descriptive statistics

[Table 3 reports the international mean, the standard deviation (for metric discrete variables), the data source and the share of students with imputed data for every variable used in the paper: test scores in math, science and reading; institutional variables (external exit exams, standardized tests; school autonomy in determining course content, establishing teachers' starting salaries, choosing textbooks, deciding on budget allocations within school, formulating the school budget and hiring teachers; public vs. private operation and funding); resources and teachers (educational expenditure per student, class size, student-teacher ratio, instructional material, instruction time, teacher education); student characteristics (grade, country's school entry age, age, gender); family background (country of birth, family status, parents' education, work status and occupation, books at home, ISEI, school's community location, GDP per capita); and home incentives and inputs (parental support, homework, computers at home).]

Notes: Mean: International mean, based on non-imputed data for each variable, weighted by sampling probabilities. SD: International standard deviation (only for metric discrete variables). Source: Data source and thus level of observation: St = student achievement test or student background questionnaire; Sc = school background questionnaire; C = country-level variable (see text for specific sources). Imputed: Fraction of students with missing and thus imputed data (in percent), weighted by sampling probabilities
3 Econometric analysis of the education production function

3.1 Estimation equation, covariance structure and sampling weights

The microeconometric estimation equation of the education production function has the following form:

T_{is} = B_{is}\beta_1 + R_{is}\beta_2 + I_{s}\beta_3 + D^{B}_{is}\beta_4 + D^{B}_{is}B_{is}\beta_5 + D^{R}_{is}\beta_6 + D^{R}_{is}R_{is}\beta_7 + D^{I}_{s}\beta_8 + D^{I}_{s}I_{s}\beta_9 + \varepsilon_{is},    (1)
where T_{is} is the achievement test score of student i in school s. B is a vector of student background data (including student characteristics, family background and home inputs), R is a vector of data on schools' resource endowment and I is a vector of institutional characteristics. Because of a particular interest in interactions between external exit exams and other institutional features, the vector I includes institutional interaction terms. The parameter vectors β1 to β9 will be estimated in the regression.

Note that with the exception of the institutional interaction terms, this specification of the international education production function restricts each effect to be the same in all countries, as well as at all levels (within schools, between schools and between countries). While it might be interesting to analyze the potential heterogeneity of certain effects between countries and between levels, given the object of interest of this paper it seems warranted to abstain from this effect heterogeneity and estimate a single effect for each variable.7

7 Wößmann (2003a) compares this restricted specification to an alternative two-step specification, discussing advantages and drawbacks particularly in light of potential omitted country-level variables, and favoring the specification employed here. The first, student-level step of the alternative specification includes country fixed effects in the estimation of (1). These country fixed effects are then regressed in a second, country-level step on averages at the country level of relevant explanatory variables. Wößmann (2003a) finds that the substantive results of the two specifications are virtually the same. Furthermore, our results presented in Sect. 6 below show that our restricted model can account for more than 85% of the between-country variation in test scores in each subject. Therefore, the scope for obvious unobserved country-specific heterogeneity, and thus the need for the country-fixed-effects specification, seems small.
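As a concrete illustration of Eq. (1), the regressor matrix could be assembled along the following lines; this is a minimal sketch in which the DataFrame df, the column lists B_cols, R_cols and I_cols, the name of the external-exit-exam indicator and the '*_imputed' indicator columns are all hypothetical. The imputation controls D are described in the following paragraphs.

import pandas as pd

def build_design(df: pd.DataFrame, B_cols, R_cols, I_cols,
                 eee_col: str = "external_exit_exam") -> pd.DataFrame:
    cols = B_cols + R_cols + I_cols
    X = df[cols].copy()
    # Institutional interaction terms with external exit exams.
    for col in I_cols:
        if col != eee_col:
            X[f"{col}_x_EEE"] = df[col] * df[eee_col]
    # Imputation controls: own intercept and own slope for imputed observations.
    for col in cols:
        d = df[f"{col}_imputed"].astype(int)   # 1 if the value was imputed, 0 if original
        X[f"D_{col}"] = d
        X[f"D_{col}_x_{col}"] = d * df[col]
    return X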
As discussed in the previous section, some of the data are imputed rather than original. Our imputation method is based on conditional mean imputation (cf. Little and Rubin 1987), which predicts the conditional mean for each missing observation on the independent variables using non-missing values of the specific variables and a set of independent variables observed for all students (see Appendix). Schafer and Schenker (2000) show that conditional mean imputation combined with appropriately corrected standard errors yields an unbiased and efficient estimator which outperforms the multiple stochastic imputation estimator (Rubin 1987). Because imputed values are estimated quantities, the statistical inference has to take account of the uncertainty involved in imputation. The correction procedure for the standard errors employed in this paper accounts for the degree of variability and uncertainty in the imputation process and for the share of missing data (cf. the discussion of the Schafer and Schenker 2000 procedure in the Appendix).

If values are not missing conditionally at random, estimates could still be biased. For example, if among observationally similar students the probability of a missing value for a variable depends on an unobserved student characteristic that also influences achievement, imputation would predict for students with a missing value the same value of the variable that is observed for the other students, which would result in biased coefficient estimates. To account for this possibility of non-randomly missing observations and to make sure that the results are not driven by imputed data, three vectors of dummy variables D^B, D^R and D^I are included as controls in the estimation. The D vectors contain one dummy for each variable in the three vectors B, R and I that takes the value of 1 for observations with missing and thus imputed data and 0 for observations with original data. The D vectors allow the observations with missing data on each variable to have their own intercepts. The interaction terms between imputation dummies and data vectors, D^B B, D^R R and D^I I, allow them to also have their own slopes. These imputation controls for each variable with missing values ensure that the results are reasonably robust against possible bias arising from data imputation.

Owing to the complex data structure produced by the PISA survey design and the multi-level nature of the explanatory variables, the error term ε of the regression has a non-trivial structure. Given the possible dependence of students within the same school, the use of school-level variables and the fact that schools were the primary sampling unit (PSU) in PISA (see Sect. 2.2), there may be unobservable correlation among the individual error terms ε_i at the school level s (cf. Moulton 1986). Therefore, we report standard errors that are clustered at the school level, relaxing the classical independence assumption to the level of schools.

Finally, PISA used a stratified sampling design within each country, producing varying sampling probabilities for different students. To obtain nationally representative estimates from the stratified survey data at the within-country level, we employ weighted least squares (WLS) estimation using sampling probabilities as weights. WLS estimation ensures that the proportional contribution to the parameter estimates of each stratum in the sample is the same as would have been obtained in a complete census enumeration (DuMouchel and Duncan 1983; Wooldridge 2001). Furthermore, at the between-country level, we weight each of the 31 countries equally.
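A minimal sketch of this estimation strategy (weighted least squares with sampling probabilities as weights and standard errors clustered at the school level) could look as follows, using the regressor matrix X from the previous sketch; the column names are hypothetical, and neither the Schafer and Schenker (2000) correction for imputed data nor the equal weighting of countries is reproduced here.

import statsmodels.api as sm

# WLS with students' sampling probabilities as weights; standard errors are
# clustered at the school level to allow error correlation within schools.
res = sm.WLS(df["test_score"], sm.add_constant(X), weights=df["weight"]).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["school_id"]},
)
print(res.summary())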
3.2 Cross-sectional data and potential endogeneity

The econometric estimation of the PISA dataset is restricted by its cross-sectional nature, which does not allow for panel or value-added estimations (cf., e.g., Hanushek 2002; Todd and Wolpin 2003). Because of unobserved student abilities, cross-sectional analyses can give rise to omitted-variable bias when the variables of interest are correlated with the unobserved characteristics. In this paper, we hope to minimize such biases due to unobserved student heterogeneity by including a large set of observed abilities, characteristics and institutions. Estimates based on cross-sectional data will be unbiased under the conditions that the explanatory variables of interest are unrelated to features that still remain unobserved, that they are exogenous to the dependent variable and that they and their impact on the dependent variable do not vary over time.

It seems straightforward that the student-specific family background B_{is} is exogenous, in a narrow sense, to the students' educational performance. Furthermore, most aspects of the family background B_{is} are time-invariant, so that the characteristics observed at the time of the PISA survey should be consistent indicators of family characteristics in the past. Therefore, student-related family background, as well as other student characteristics like area of residence, affects not only the educational value-added in the year of testing but rather educational performance throughout a student's entire school life. A level-estimation approach thus seems well-suited for determining the total association between family background and student-related characteristics on the one hand and students' achievements on the other hand. However, family background may be correlated with unobserved ability, which again may be correlated across generations. Therefore, a narrow causal interpretation of family-background effects is not possible. In the presented analyses, the large set of family-background indicators mainly serves as control variables.

Many of the institutional features I_{s} of an education system may also be reasonably assumed to be exogenous to individual students' performance. The cross-country nature of the data allows the systematic utilization of country differences in institutional settings of the educational systems, which would be neglected in within-country specifications. However, a caveat applies here in that a country's institutions may be related to unobserved, e.g. cultural, factors which in turn may be related to student performance. To the extent that this may be an important issue, caution should prevail in drawing causal inferences and policy conclusions from the presented results. In terms of time variability, changes in institutions generally occur only gradually and evolutionarily rather than radically, particularly in democratic societies. Consequently, the institutional structures of education systems are highly time-invariant and thus most likely constant, or at least rather similar, during a student's life in secondary school. We therefore assume that the educational institutions observed at one point in time persist unchanged during the students' secondary-school life and thus contribute to students' achievement levels, and not only to the change from one grade to the next.
Still, institutional structures may differ between primary and secondary school, so that issues of omitted prior inputs in a student's life may still bias estimated institutional effects, generally in an attenuating way.

The situation is more problematic for schools' resource endowments R_{is}. For example, educational expenditure per student has been shown to vary considerably over time (cf. Gundlach et al. 2001). Still, as far as the cross-country variation in educational expenditure is concerned, the assumption of relatively constant relative expenditure levels seems not too implausible, so that country-level values of expenditure per student in the year of the PISA survey may yield reasonable proxies for expenditure per student over students' school life.

However, students' educational resource endowments are not necessarily exogenous to their educational performance. Resource endogeneity in the narrow sense should not be a serious issue at the country level, due to the lack of a supranational government body that would redistribute educational expenditures according to students' achievement and due to international mobility constraints. But within countries, endogenous resource allocations, both between and within schools, may bias least-squares estimates of the effects of resources on student performance. To avoid biases from within-school sorting of resources according to the needs and achievement of students, Akerhielm (1995) suggests an IV estimation approach that uses school-level variables as instruments for class size. Accordingly, in our regressions we use the student-teacher ratio at the school level as an instrument for the actual class size in each subject in a two-stage least squares (2SLS) estimation.8 However, this approach may still be subject to between-school sorting of differently achieving students based on schools' resource endowments, e.g. caused by school-related settlement decisions of parents (cf. West and Wößmann 2006). To the extent that between-school sorting is unrelated to the family-background and institutional characteristics for which our regressions control, it might still bias estimated resource effects (cf. Wößmann and West 2006; Wößmann 2005). Furthermore, variation in individual students' resource endowments over time, e.g. class-size variation, may also bias levels-based estimates, generally resulting in a downward attenuation bias. The PISA data do not allow for overcoming these possibly remaining biases.

8 Note that this approach also accounts for measurement-error biases in the class-size variable.
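The 2SLS step could be sketched as follows; df, Z (a DataFrame containing all other controls) and the column names are hypothetical, and the standard errors from this manual second stage are not the correct 2SLS standard errors, so a dedicated IV routine should be used for actual inference.

import pandas as pd
import statsmodels.api as sm

# First stage: regress actual class size on the school-level student-teacher
# ratio (the instrument) and all other controls.
exog_fs = sm.add_constant(pd.concat([df[["student_teacher_ratio"]], Z], axis=1))
first_stage = sm.OLS(df["class_size"], exog_fs).fit()

# Second stage: replace class size by its first-stage fitted values.
exog_ss = sm.add_constant(pd.concat(
    [first_stage.fittedvalues.rename("class_size_hat"), Z], axis=1))
second_stage = sm.OLS(df["test_score"], exog_ss).fit()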
4 Estimation results

This section discusses the results of estimating (1) for the three subjects. The results are reported in Table 4. The discussion of results only briefly refers to selected control variables (see Fuchs and Wößmann 2004a for a more extensive discussion) and then focuses on the effects of institutional features.
Table 4 International education production functions

[Table 4 reports, separately for math, science and reading, the coefficient estimates and clustering-robust standard errors of the international education production functions. The rows cover the institutional variables (external exit exams, standardized tests; school autonomy in determining course content, establishing teachers' starting salaries, choosing textbooks, deciding on budget allocations within school, formulating the school budget and hiring teachers; publicly managed school and government funding share) and their interactions with external exit exams; resources and teachers (educational expenditure per student, class size, instructional material, instruction time, teacher education); student characteristics (grade, country's school entry age, age, gender); family background (country of birth, family status, parents' education, work status and occupation, books at home, ISEI, school's community location, GDP per capita); and home incentives and inputs (parental support, homework, computers at home). The estimation samples comprise 96,507 students (6,611 schools) in math, 96,416 students (6,613 schools) in science and 173,618 students (6,626 schools) in reading, in 31 countries; R2 is 0.344, 0.285 and 0.354, respectively (0.318, 0.258 and 0.322 without imputation controls).]

Notes: Dependent variable: PISA international test score. 2SLS regression in each subject, with class size instrumented by schools' student-teacher ratio. Regressions weighted by students' sampling probabilities. Coef.: Coefficient estimate. SE: Clustering-robust standard error (taking account of correlated error terms within schools), corrected for imputed data using the Schafer and Schenker (2000) procedure. Significance levels (based on clustering-robust standard errors): *** 1%; ** 5%; * 10%. a Clustering-robust standard errors (and thus significance levels) based on countries rather than schools as clusters
4.1 Control variables: student, family and school characteristics

Dozens of variables of student characteristics, family background and home inputs show a statistically significant association with student performance on the three PISA tests. These include indicators of grade levels, age, gender, immigration status, family status, parental education and work, socio-economic background of the family, community location and home inputs (see Table 4).

Given that reading is a new subject on the PISA test, the findings on gender differences seem noteworthy. In math and science, boys perform statistically significantly better than girls, at 16.9 achievement points (AP) in math and 4.0 AP in science. The opposite is true for reading, where girls outperform boys by 23.7 AP. Given that test scores are scaled to have an international standard deviation among OECD countries of 100, the size of an estimate can be interpreted as percent of an international standard deviation. As a concrete benchmark for size comparisons, the unconditional performance difference between 9th- and 10th-grade students (the two largest grade categories) in our sample is 30.3 AP in math, 32.4 in science and 33.2 in reading. That is, the boys' lead in math equals roughly half of this grade equivalent, and the girls' lead in reading roughly two thirds. As an alternative benchmark, estimating the average unconditional performance difference per month of age between students and extrapolating it to a performance difference per year of age yields 12.9 AP in math, 19.3 in science and 16.4 in reading.

With the exception of the gender effect, the results on associations with student and family characteristics in reading are qualitatively the same as in math and science. But most of the family-background effects tend to be larger in
reading than in math and science. In general, the results are very much in line with results derived from previous international student achievement tests (e.g., Wößmann 2003a).

Two categories of variables that have not been available in the previous international achievement tests concern the work status of students' parents. First, students with at least one parent working full-time perform statistically significantly better than students whose parents do not work. However, there is no statistically significant performance difference between students whose parents do not work and students whose parents work at most half-time. Neither is there a statistically significant performance difference depending on whether one or both parents worked full-time. Second, students' performance varies statistically significantly with their parents' occupation, expressed as blue-collar and white-collar dummies.9 As throughout the paper, these effects are calculated holding all other influence factors constant. For example, they are estimated for a given level of parental education.10

The general pattern of associations of student performance with schools' resource endowments and teacher characteristics is that resources seem to be positively related to student performance, once family-background and institutional effects are extensively controlled for. This holds particularly in terms of the quality of instructional material and of the teaching force. By contrast, there are no positive associations with reduced class sizes, and the associations with expenditure levels are shaky and small, not guaranteeing that the benefits of the expenditures would warrant the costs.

4.2 Institutions

Economic theory suggests that external exit exams, which report performance relative to an external standard, may affect student performance positively (cf. Bishop and Wößmann 2004; Costrell 1994; Betts 1998). In line with this argument, students in school systems with external exit exams perform statistically significantly better, by 19.5 AP in math, than students in school systems without external exit exams (based on a specification without interaction terms, not reported in the table). This effect replicates previous findings based on other international studies (Bishop 1997; Wößmann 2003a, b).

9 White-collar workers were defined as major groups 1–3 of the International Standard Classification of Occupations (ISCO), encompassing legislators, senior officials and managers; professionals; and technicians and associate professionals. Blue-collar workers were defined as ISCO 8–9, encompassing plant and machine operators and assemblers; and sales and services elementary occupations. The residual category between the two, ranging from ISCO 4–7, encompasses clerks; service workers and shop and market sales workers; skilled agricultural and fishery workers; and craft and related trades workers. The variable was set to white-collar if at least one parent was in ISCO 1–3, and to blue-collar if no parent was in ISCO 1–7.
10 Parental education is measured by the highest educational category achieved by either father or mother, whichever is the higher one.
Wößmann (2003b) shows that the cross-country result is robust to the inclusion of a range of other systemic and cultural features of a country, as well as to the inclusion of regional dummies, suggesting that it is not strongly driven by severe cultural differences. Also, Bishop (1997) and Jürges et al. (2005) present within-country evidence that students perform better in regions with external exams in Canada and Germany, the two national education systems within which some regions feature external exams and others do not.

The association between external exit exams and student performance in science is statistically significant at the 11% level. In reading, the relationship is also positive, but not statistically significant. However, there are no direct data available on external exit exams in reading, so that the measure used is a simple mean of math and science. Therefore, this smaller effect might be driven by attenuation bias due to measurement error. Furthermore, the low levels of statistical significance in all three subjects reflect the measurement of external exit exams at the country level, so that there are only 31 separate observations on this variable, which we account for by clustering the standard error of this variable at the country level. Still, the pattern of results provides a hint that external exit exams may be more important for performance in math and science than in reading (cf. also Bishop 2006).

At the school level, there is information on whether standardized testing was used for 15-year-old students at least once a year. This alternative measure of external testing is statistically significantly related to better student performance in all three subjects.

From a theoretical point of view, one may expect institutional effects to differ between systems with and without external exams. External exams can mitigate informational asymmetries in the school system, thereby introducing accountability and transparency and preventing opportunistic behavior in decentralized decision-making. This reasoning leads to a possible complementarity between external exams and school autonomy in other decision-making areas, the extent of which depends on the incentives for local opportunistic behavior and the extent of a local knowledge lead in a given decision-making area (cf. Wößmann 2003b, c). Therefore, the model reported in Table 4 allows for heterogeneity of institutional effects across systems with and without external exams by including interaction terms between external exit exams and the other institutional measures.

The relationship between standardized tests and student achievement indeed differs strongly and statistically significantly between systems with and without external exit exams. If there are no external exit exams, regular standardized testing is statistically significantly negatively related to student achievement in all three subjects. That is, if the educational goals and standards of the school system are not clearly specified, regular standardized testing can backfire and lead to weaker student performance. But the relationship between regular standardized testing and student achievement in all three subjects turns around to be statistically significantly positive in systems where external exit exams are in place.11 That is, what was only hypothesized in Wößmann (2003c), who lacked relevant data to support the hypothesis, is now backed up by empirical evidence: regular standardized examination seems to have additional positive performance effects when added to central exit exams.

11 The statement that the positive relationship between regular standardized testing and student achievement in external-exam systems—whose point estimate can be calculated from the results reported in Table 4 as, e.g., 5.3 = −8.8 + 14.1 in science—differs statistically significantly from zero is based on an auxiliary specification (not reported) which defines the interaction terms the other way round and thus allows an assessment of the statistical significance of the relationship between regular standardized testing and student achievement in external-exam systems. The same is true for similar statements made below.
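As footnote 11 indicates, the point estimate in external-exam systems is the sum of the main effect and the interaction term. A minimal sketch of how such a combined coefficient and its standard error could alternatively be obtained from the estimated covariance matrix is given below; the results object res (e.g. as returned by a least-squares fit) and the coefficient names are hypothetical, whereas the paper re-estimates an auxiliary specification with the interaction defined the other way round.

import numpy as np

def combined_effect(res, main: str, interaction: str):
    # Point estimate of main effect plus interaction, e.g. -8.8 + 14.1 = 5.3 in science.
    estimate = res.params[main] + res.params[interaction]
    # Variance of a sum of two estimated coefficients.
    cov = res.cov_params()
    variance = (cov.loc[main, main] + cov.loc[interaction, interaction]
                + 2 * cov.loc[main, interaction])
    return estimate, np.sqrt(variance)

# e.g. combined_effect(res, "standardized_tests", "standardized_tests_x_EEE")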
A similar pattern can be observed for school autonomy in determining course contents. In systems without external exit exams, students in schools that have autonomy in determining course contents perform statistically significantly worse than otherwise. That is, the effect of school autonomy in this area seems to be negative if there are no external exit exams to hold schools accountable for what they are doing. This effect turns around to be statistically significantly positive where schools are made accountable for their behavior through external exit exams. This pattern of results suggests that decision-making on course contents entails substantial incentives for local opportunistic behavior as well as a significant local knowledge lead (cf. Wößmann 2003b). The incentives for local opportunistic behavior stem from the fact that content decisions influence teacher workloads, which can account for the negative autonomy effect in systems without accountability. The local knowledge lead stems from the fact that teachers probably know best which specific course contents are best suited for their specific students, which can account for the positive autonomy effect in systems where external exit exams mitigate the scope for opportunistic behavior.

The decision-making area of establishing teachers' starting salaries shows a similar pattern of results. A negative association between school autonomy in establishing teachers' starting salaries and student performance in systems without external exit exams turns around to be positive in systems with external exit exams in math, and is reduced to about zero in the other two subjects.

In systems without external exit exams, there is no statistically significant association between school autonomy in choosing textbooks and student achievement in any subject. However, there is a substantial statistically significant positive association in systems with external exit exams in all three subjects. The estimated effect sizes seem particularly large, though, which may hint at possible remaining biases. The result pattern reflects the theoretical case where incentives for local opportunistic behavior are offset by a local knowledge lead (Wößmann 2003b). External exit exams suppress the negative opportunism effect and keep the positive knowledge-lead effect. Thus, it seems that the positive association of student performance with school autonomy in textbook choice prevails only in systems where schools are made accountable for their behavior through external exit exams.

Economic theory suggests that school autonomy in such areas as process operations and personnel-management decisions may be conducive to student performance by using local knowledge to increase the effectiveness of teaching. By contrast, school autonomy in areas that allow for strong local opportunistic
behavior, such as standard and budget setting, may be detrimental to student performance by increasing the scope for diverting resources from teaching (cf. Bishop and Wößmann 2004). The result on school autonomy in the process operation of choosing textbooks supports this view. Similarly, school autonomy in deciding on budget allocations within schools is statistically significantly associated with higher achievement in all three subjects, once the level at which the budget is formulated is held constant. This pattern does not differ significantly between systems without and with central exams, although it tends to get stronger in the latter case. The pattern suggests that this decision-making area features only small incentives for local opportunistic behavior, but a significant local knowledge lead (cf. Wößmann 2003b). By contrast, the association between school autonomy in formulating the school budget and student achievement is statistically significantly negative in math and science, again not differing significantly between systems with and without external exams. This combination of effects, which suggests having the size of the budget externally determined while having schools decide on within-school budget allocations themselves, replicates and corroborates the findings reported in Wößmann (2003a). Furthermore, the PISA indicator of within-school budget allocation seems superior to the data previously used in TIMSS, where this indicator was not available and information on teachers' influence on purchasing supplies was used instead as a proxy.

Students in schools that have autonomy in hiring their teachers perform statistically significantly better in math, corroborating theory (Bishop and Wößmann 2004) and previous evidence (Wößmann 2003a). The association is insignificant in reading, though, and it is significantly negative in science, albeit only in systems without external exit exams.

PISA also provides school-level data on the public/private operation and funding of schools, not previously available at the school level in international studies. Economic theory is not unequivocal on the possible effects of public versus private involvement in education, but it often suggests that private school operation may lead to higher quality and lower cost than public operation (cf. Shleifer 1998; Bishop and Wößmann 2004), while reliance on private funding of schools may or may not have detrimental effects for some students (cf. Epple and Romano 1998; Nechyba 2000). In the PISA database, public schools are defined as schools managed directly or indirectly by a public education authority, government agency or governing board appointed by government or elected by public franchise. By contrast, private schools are defined as schools managed directly or indirectly by a non-government organization, e.g. a church, trade union or business.

The results show that students in privately managed schools perform statistically significantly better than students in publicly managed schools, after controlling for the large set of background features. The effect does not differ significantly between systems with and without external exit exams. In contrast to the management of schools, we find that the share of private funding that a school receives is not significantly related to student performance, once the mode of management is held constant. In PISA, public funding
is defined as the percentage of total school funding coming from government sources at different levels, as opposed to fees, donations and so on. Thus, the results on public versus private involvement differ between the management and funding of schools.

Overall, the share of performance variation accounted for by our models at the student level is relatively large, at 31.8% in math, 25.8% in science and 32.2% in reading (not counting variation accounted for by the imputation dummies). This is substantially larger than in previous models using TIMSS data, where 22% of the math variation and 19% of the science variation could be accounted for at the student level (Wößmann 2003a).

In sum, our evidence corroborates the notion that institutions of the school system are important for student achievement. External and standardized examinations seem to be performance-conducive. The effect of school autonomy depends on the specific decision-making area and on whether the school system has external exams. School autonomy is mostly positively associated with student performance in areas of process and personnel decisions, which may contain informational advantages at the local level. By contrast, the association is negative in areas of setting standards and budgets, which may be prone to local rent-seeking activities. Furthermore, the general pattern of results suggests that the effects of school autonomy differ between systems with and without external exit exams. External exit exams and school autonomy are complementary institutional features of a school system. School autonomy tends to be more positively associated with student performance in all subjects when external exit exams are in place to hold autonomous schools accountable for their decisions. This evidence corroborates the reasoning of external exams as "currencies" of school systems (Wößmann 2003c) which ensure that decentralized school systems function in the interest of students' educational performance. Finally, private school operation is associated with higher student achievement, which is not true for private school financing.
5 Robustness of results

To test whether the reported results on institutional effects in our base specification are sensitive to the specific model specification, we have performed numerous robustness checks in terms of the sensitivity of the imputation, the set of controls and the specific sample.

In terms of the sensitivity of imputations, we test our employed method against three alternative treatments of missing data. First, to test whether the reported coefficient for each variable hinges on the inclusion of students with missing data on the specific variable, we re-estimate the reported specification as many times as there are variables in the model, each time omitting the students for whom data are missing on the specific variable. In all cases, the estimates of the alternative model are of similar magnitude and statistical significance to the estimation on the full (imputed) student sample, confirming that the reported coefficient estimates are not driven by the employed imputation method.
Second, rather than performing our conditional mean imputation method, we simply impute a constant for each missing value and add, as before, an imputation dummy for each variable that equals 1 if the respective variable has a missing value for the student and 0 otherwise. Again, the alternative procedure leads to estimates that are comparable in terms of magnitude and statistical significance to the base specification, with the sole exception of school autonomy over deciding on budget allocations within schools. While the statistically significant positive estimate on this variable remains in the alternative imputation method, its interaction term with external exit exams turns statistically significantly negative. This negative interaction does not survive our more elaborate imputation method.

Third, we re-estimate our base specification omitting the imputation-dummy controls [all terms containing D in (1)]. Again, most results are very robust to this alternative specification, which implicitly assumes that observations are missing conditionally at random. Exceptions are that in math, the coefficient estimate on school autonomy in formulating the school budget loses statistical significance, while its interaction term with external exit exams becomes statistically significantly negative, and that, again only in math, the interaction term of government funding and external exit exams turns statistically significant. It seems that the assumption of data missing conditionally at random would lead to biased estimates in these few cases.

The next set of robustness checks relates to the controls included in the model. First, to account for group selection into schools based on observable characteristics, we estimate an alternative specification that controls for the composition of the student population in terms of family background measures, such as median parental education or socio-economic status at the school level. The results confirm the institutional effects estimated in the base specification, suggesting that they are not driven by the differential composition of the student body. In particular, the performance differential between publicly and privately operated schools is robust to observable school composition effects, corroborating findings by Dronkers and Robert (2003). Of course, the differential may still be affected by selection on unobservable characteristics. The sole difference in findings relative to the base specification is that in math, the positive association between government funding share and student achievement reaches standard levels of statistical significance. The performance difference between fully privately financed and fully state financed schools is approximately 10 AP.

Second, we add a variable on the age of first ability-based tracking into different school types at the country level (from OECD 2001) as an additional institutional control. The coefficient estimates on the tracking variable are very close to zero and statistically highly insignificant, suggesting that tracking does not have a significant association with the level of student performance across countries.

Third, we add a variable on teacher salaries, namely annual statutory salaries of teachers in public institutions of lower secondary education after 15 years of experience (measured in PPP, from OECD 2003). The salary variable turns out statistically insignificant, which might be related to the fact that internationally comparable salary data are missing for about one third of our sample.
Fourth, we initially included two additional institutional variables on school autonomy, namely in firing teachers and in determining teachers’ salary increases. These two variables prove highly collinear with school autonomy in hiring teachers and in determining teachers’ starting salaries. Because adding them to the base specification renders them statistically insignificant, we chose to drop them from the specification, noting that their collinear counterparts may capture some of their potential effects.

Fifth, it is not obvious that the international socio-economic index (ISEI), which is based on parental occupations, should be included as a separate family-background control. However, all results prove robust to its inclusion or exclusion in the specification. Even the coefficients on the dummies of parents’ jobs show separate predictive power, although they get noticeably larger in a specification without the ISEI control.

Sixth, it is also not obvious that the regressions should control for grade levels. Given PISA’s age-based target population, a student’s grade will to some extent be endogenous to her performance, particularly in systems with common grade repetition. Also, international differences in school entry age are only controlled for at the country level. Unfortunately, there are no data on individual school entry age or on grade repetition. Thus, to check for robustness of our results, we repeated the regressions without the grade controls. There were few qualitative changes to the presented results, with two noteworthy exceptions. The first is the coefficient on countries’ school entry age, which turns significantly negative in a specification without grade dummies, as might be expected. The second is the coefficient on the share of a school’s government funding, which turns statistically significantly negative in science and reading without grade controls. This change is driven by a negative correlation between grade level and government funding, which reflects the fact that schools that serve higher grades are likely to depend more on private funding. On average, grades 10 and higher receive 8.4 percentage points less public funding than grades 9 and lower. Thus, not controlling for grades might leave the coefficient on government funding biased by sorting of weaker students into schools with a higher share of government funding. Given these reasons to expect the government-funding result to be biased without grade controls, we decided to keep the grade controls in the base specification.

The final set of robustness checks relates to the definition of the student and country sample. The first two alternative sample specifications also refer to the specific grades included. First, while our base specification dropped the small number of students in grades six or lower and twelve or higher, who seem to be strong outliers, their inclusion in the model does not change any of the substantive results. Second, to reduce the sample of students even further to the grade levels which students should normally attend based on country rules, we re-estimate our model on a sample that excludes all students who are not in the two grades with the largest share of 15-year-olds in each respective country. Again, this alternative sample yields qualitatively the same results as our base model,
indicating that grade repetition policies, which are less relevant in the two-grade specification, do not drive the results of the base specification.

Third, restricting the sample of countries to a more homogeneous group does not change the qualitative findings, either. We restrict the sample to the group of economically developed OECD countries, which drops the three poorest countries from our base sample (Brazil, Latvia and Russia). Next, we even restrict the sample to countries with a PPP-measured GDP per capita in 2000 of at least 13,000, dropping altogether six countries from our base sample (Hungary, Mexico and Poland in addition to the previous three). It turns out that none of the reported institutional effects is driven by differences between developing and developed countries; rather, they are robust to variation within these more homogeneous groups of countries.

Fourth, it seems that the positive association between educational expenditure per student and student performance in math is largely driven by a few countries with very low spending, but is not present among the developed OECD countries. Once the four countries with particularly low spending levels are excluded from the sample, the coefficient on educational expenditure per student at the country level becomes statistically insignificant in math as well.12

In sum, the results reported for our base specification prove remarkably robust to changes in the imputation specification, additional controls and different samples.

6 Explanatory power at the country level

In our regressions, the five categories of variables—student characteristics; family background; home incentives and inputs; resources and teachers; and institutions and their interactions—all add statistically significantly to an explanation of the variation in student performance. To assess how much each of these categories, as well as the whole model combined, adds to an explanation of the between-country variation in student performance, we do the following exercise. First, we perform the student-level regression reported in Table 4, equivalent to (1), only without the imputation dummies:

T_{is} = S_{is}\beta_{11} + F_{is}\beta_{12} + H_{is}\beta_{13} + R_{is}\beta_2 + I_s\beta_3 + \varepsilon_{is},
(2)
where the student-background vector B is subdivided into three parts as in Table 4, namely student characteristics S, family background F and home inputs H.

12 The OECD (2001) reports a measure of cumulative educational expenditure per student for 24 countries, which cumulates expenditure over the theoretical duration of education up to the age of 15. Using this alternative measure of educational expenditure, we find equivalent results of a statistically significant relation with math performance, a weakly statistically significant relation with science performance, and a statistically insignificant relation with reading performance. Given that this measure is available for only 24 countries, and given that it partly proxies for the duration of schooling in different countries, we stick to our alternative measure of average annual educational expenditure per student in secondary education.
Next, we construct one index for each of the five categories of variables as the sum of the products between each variable in the category and its respective coefficient β. That is, the student-characteristics index SI is given by

SI_{is} = S_{is}\beta_{11},
(3)
and equivalently for the other four categories of variables. Note that, as throughout the paper, this procedure keeps restricting all coefficients β to those estimated in the student-level international education production function, abstaining from any possible effect heterogeneity between countries or levels (e.g., within versus between countries).

Finally, we take the country means of each of these indices in each subject, as well as of student performance in each subject, properly weighting the student observations by their sampling probabilities within each country. This allows us to perform regressions at the country level, on the basis of 31 country observations. These regressions allow us to derive measures of the contribution of each of the five categories of variables to the between-country variation in test scores.

Note that this whole exercise is not set up to maximize explanatory power at the between-country level, but rather to fully replicate the simple cross-country model that we estimate above. That is, we do not allow for country heterogeneity. We do not distinguish between estimates based on their precision. We exclude the imputation dummies because, as statistical controls, they obviously do not really represent a part of the theoretical model, although they would increase the apparent explanatory power of our model. The whole point of this exercise is to estimate the very same model as before, at the student level, and then see how much variation is accounted for at the country level. In many respects, the exercise is set up in a way that loads the dice against a strong finding at the between-country level. But despite this setup, our models can account for nearly all of the variation that exists between countries.

As reported in Table 5, models regressing the country means of student performance on the five indices yield an explanatory power of between 84.6 and 87.3% of the total cross-country variation in test scores. This is reassuring for our model specification, which leaves little room for substantial unobserved country-specific heterogeneity. Most of the unexplained variation in student-level test scores, which ranges from 67.8 to 74.2% in the regressions of Table 4, thus seems to be due to unobserved within-country student-level ability differences and not due to a country-level component.

To assess the contribution of the five indices individually, we perform two analyses. First, we enter each index individually and look at the R2 of that regression, which would attribute any variation that is joint with the other indices to this index if they are positively correlated. Second, we look at the change in the R2 of the model that results from adding each specific index to a model that already contains the other four indices. Note that the latter procedure will result in a smaller R2 than the former if the additional index is positively correlated with the other indices, and a larger R2 if they are negatively correlated.
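For concreteness, the following sketch illustrates the mechanics of this exercise: category indices built from variables times their student-level coefficients, sampling-weighted country means, and the two R2 comparisons. It is only a schematic reconstruction under stated assumptions; the data, coefficient values, variable names and the reduced set of categories are invented for illustration and are not the actual PISA variables or estimates.

```python
# Sketch of the index construction and country-level R^2 exercise (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, n_countries = 5000, 31
df = pd.DataFrame({
    "country": rng.integers(0, n_countries, n),
    "w": rng.uniform(0.5, 1.5, n),                # sampling weights
    "female": rng.integers(0, 2, n),
    "books_home": rng.integers(0, 6, n),
    "class_size": rng.normal(22, 5, n),
    "external_exam": rng.integers(0, 2, n),
})
# Hypothetical student-level coefficients standing in for the production-function estimates.
coefs = {"female": 5.0, "books_home": 12.0, "class_size": 0.5, "external_exam": 15.0}
groups = {"student": ["female"], "family": ["books_home"],
          "resources": ["class_size"], "institutions": ["external_exam"]}
df["score"] = 450 + sum(df[c] * b for c, b in coefs.items()) + rng.normal(0, 80, n)

# Category index: sum over the category of (variable * its coefficient).
for g, cols in groups.items():
    df[g] = sum(df[c] * coefs[c] for c in cols)

def wmean(frame, col):
    return np.average(frame[col], weights=frame["w"])

# Sampling-weighted country means of the score and of each index.
cm = df.groupby("country").apply(
    lambda f: pd.Series({c: wmean(f, c) for c in ["score", *groups]}))

def r2(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

full = r2(cm["score"], cm[list(groups)])
for g in groups:
    alone = r2(cm["score"], cm[[g]])                                   # entered individually
    added = full - r2(cm["score"], cm[[h for h in groups if h != g]])  # added after the rest
    print(f"{g:12s} alone={alone:.3f} added_last={added:.3f}")
```

With real data, the "alone" and "added_last" columns would correspond to the two halves of Table 5.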
Table 5 Contribution to explanatory power (R2) at the country level

                                      Entered individually       Entered after the remaining four categories
                                      Math   Science  Reading    Math   Science  Reading
Student characteristics               0.198  0.263    0.245      0.024  0.043    0.040
Family background                     0.491  0.409    0.384      0.012  0.114    0.273
Home incentives and inputs            0.004  0.013    0.048      0.001  0.002    0.001
Resources and teachers                0.240  0.283    0.126      0.285  0.106    0.049
Institutions and their interactions   0.339  0.242    0.194      0.266  0.292    0.312
Full model                            0.873  0.846    0.853

Notes: Dependent variable: Country means of PISA international test scores. See text for details
When each of the five indices is entered individually, student characteristics can account for 19.8 to 26.3% of the country-level variation in test scores, family background for 38.4 to 49.1%, home incentives and inputs for up to 4.8%, resources and teacher characteristics for 12.6 to 28.3% and institutions for 19.4 to 33.9%. When entered after the other four indices, the contributions of the first three indices drop considerably. More interestingly, the R2 of entering resources and teacher characteristics drops to 10.6% in science and to 4.9% in reading (it stays at 28.5% in math).13 Institutions account for an R2 of 26.6% in math, 29.2% in science and 31.2% in reading. This shows that both resources and institutions contribute considerably to the international variation in student performance, but that the importance of institutions for the cross-country variation in test scores seems to be greater than that of resources.

13 Note that part of the variation attributed to resources stems from the counterintuitive positive coefficient on class size.

7 Conclusion

The international education production functions estimated in this paper can account for most of the between-country variation in student performance in math, science and reading. Student characteristics, family backgrounds, home inputs, resources and teachers, and institutions all contribute significantly to differences in students’ educational achievement. The PISA study used in this paper distinguishes itself from previous international student achievement studies through its focus on reading literacy, real-life rather than curriculum-based questions, age rather than grade as the target population, and more detailed data on family backgrounds and institutions. The PISA-based results of this paper corroborate and extend findings based on previous international studies. In particular, the institutional structure of the education system is again found to be strongly associated with how much students learn in different countries, consistently across the three subjects. Institutions account for roughly one quarter of the international variation in student performance.
The main findings are as follows. Our results confirm previous evidence that external exit exams are statistically significantly positively associated with student performance in math, and marginally so in science. The positive association in reading lacks statistical significance, which may however be due to poor data quality on the existence of external exit exams in this subject. As an alternative measure of external examination, regular standardized testing shows a statistically significant positive association with performance in all three subjects. Consistent with theory as well as previous evidence, superior student performance is associated with school autonomy in personnel-management and process decisions such as deciding budget allocations within schools, textbook choice and hiring of teachers (the latter only in math). By contrast, superior performance is associated with centralized decision-making in areas with scope for decentralized opportunistic behavior, such as formulating the overall school budget. The performance effects of school autonomy tend to be more positive in systems where external exit exams are in place, emphasizing the role of external exams as “currencies” of school systems. Finally, students in publicly managed schools perform worse than students in privately managed schools. However, holding the mode of private versus public management constant, the same is not true for students in schools that receive a larger share of private funding. The findings on institutional associations are mostly consistent across the three subject areas.

In terms of control variables, as in previous studies, students’ family background is consistently strongly associated with their educational performance. We find that the estimated effects of family background as measured by parental education, parental occupation or the number of books at home are considerably stronger in reading than in math and science. Furthermore, while boys outperform girls in math and science, the opposite is true in reading. While smaller classes do not go hand in hand with superior student performance, better equipment with instructional material and better-educated teachers do.

Obviously, this paper opens numerous directions for future research. We close by naming three of them. First, this paper has focused on the average productivity of school systems. As a complement, it would be informative to analyze how the different inputs affect the equity of educational outcomes. Second, related to educational equity, possible peer and composition effects have been neglected in this paper. Given that they play a dominant role in many theoretical models of educational production, an empirical analysis of their importance in a cross-country setting would be informative, but is obviously limited by the well-known problems of their empirical identification. Finally, an empirical conceptualization of additional important institutional and other features of the school systems could contribute to an advancement of our knowledge. For example, an encompassing empirical measure of such factors as the incentive-intensity of teacher and school contracts, other performance incentives and teacher quality is still missing in the literature.

Acknowledgments Financial support by the Volkswagen Foundation is gratefully acknowledged. We would like to thank John Bishop, George Psacharopoulos, Andreas Schleicher, Petra Todd,
Manfred Weiß, Barbara Wolfe, the editor and three anonymous referees, as well as participants at the annual conferences of the American Economic Association in Philadelphia, PA, the Australasian Meeting of the Econometric Society in Melbourne, the European Association of Labour Economists in Lisbon, the International Institute of Public Finance in Milan, the German Economic Association in Dresden, the Human Capital Workshop at Maastricht University and at the Ifo seminar in Munich for helpful comments.
Appendix: Data imputation method

To obtain a complete dataset for all students for whom performance data are available, we impute missing values of explanatory variables using a set of “fundamental” explanatory variables F that are available for all students. These fundamental variables F include gender, age, six grade dummies, three dummies on which parent the students live with, six dummies for the number of books at home, GDP per capita as a measure of the country’s level of economic development and the country’s average educational expenditure per student in secondary education.14 Missing values of a variable M are imputed by first regressing the available values of M (observed for students s) on the fundamental variables F:

M_s = F_s\theta + \varepsilon_s.
(4)
Then, the imputed value of M for student i is predicted using student i’s values of the F variables and the coefficient vector θ obtained in regression (4):

M_i = F_i\theta.
(5)
The imputation method for the imputed variables is a WLS estimation for metric discrete variables, an ordered probit model for ordinal variables and a probit model for dichotomous variables. For metric discrete variables, predicted values are filled in for missing data. For ordinal and dichotomous variables, in each category the respective predicted probability is filled in for missing data.
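A minimal sketch of this conditional mean imputation for one metric and one dichotomous variable is given below. It assumes a pandas DataFrame in which the fundamental variables F have no missing values; it uses unweighted OLS and a probit where the paper uses WLS with sampling weights and, for ordinal variables, an ordered probit, and all column names and the generated data are purely illustrative.

```python
# Sketch of the conditional mean imputation in Eqs. (4)-(5) with imputation dummies.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: fundamentals F are fully observed, the other variables have gaps.
rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({"female": rng.integers(0, 2, n),
                   "age": rng.normal(15.7, 0.3, n),
                   "books_home": rng.integers(0, 6, n)})
F_COLS = ["female", "age", "books_home"]
df["parent_educ_years"] = 8 + 1.2 * df["books_home"] + rng.normal(0, 2, n)
df["private_school"] = (rng.random(n) < 0.1).astype(float)
for col in ["parent_educ_years", "private_school"]:
    df.loc[rng.random(n) < 0.15, col] = np.nan        # introduce missing values

def impute(df, col, binary=False):
    """Regress the observed values of `col` on the fundamentals F, fill predictions,
    and flag imputed observations with a dummy."""
    obs = df[col].notna()
    X = sm.add_constant(df[F_COLS])
    if binary:
        fit = sm.Probit(df.loc[obs, col], X.loc[obs]).fit(disp=0)
    else:
        fit = sm.OLS(df.loc[obs, col], X.loc[obs]).fit()
    df[col + "_imp_dummy"] = (~obs).astype(int)        # 1 if the value was imputed
    df.loc[~obs, col] = fit.predict(X.loc[~obs])       # predicted value / probability
    return df

df = impute(df, "parent_educ_years")                   # metric variable
df = impute(df, "private_school", binary=True)         # dichotomous variable
```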
Because this imputation method predicts the expected values of missing data in a deterministic way, standard errors would be biased downward when estimated in the standard way. To overcome the downward bias, we employ the procedure for correction of standard errors suggested by Schafer and Schenker (2000), which provides the following corrected standard error for the coefficient estimate β of any given variable:

\left(\sigma_\beta^{\mathrm{corr}}\right)^2 = \hat{\sigma}_\beta^2 + 2r\,\hat{\sigma}_\beta^2\,\frac{\hat{\sigma}_{\beta,\mathrm{mis}}^2}{\hat{\sigma}_\beta^2} + \hat{\sigma}_\beta^2\,\frac{\hat{\sigma}_{\beta,\mathrm{imp}}^2}{\hat{\sigma}_\beta^2}

(6)
14 The small amount of missing data on the variables in F is imputed by median imputation at the lowest available level (school or country).
The corrected standard error consists of three parts. The first part is the standard error estimated in the standard way, which disregards the imputation of some of the data. The second part is a correction term which takes into account the share r of missing values among the total number of observations for this variable, together with the ratio of the variance of the imputed data \hat{\sigma}_{\beta,\mathrm{mis}}^2 over the variance of the full sample \hat{\sigma}_\beta^2. The third part is a correction term which takes into account the uncertainty of the imputation model, i.e. the variance of the residuals of the imputation model \hat{\sigma}_{\beta,\mathrm{imp}}^2 (see Schafer and Schenker 2000 for details). To account for the complex survey design with intra-cluster correlations, all variance arguments in (6)—\hat{\sigma}_\beta^2, \hat{\sigma}_{\beta,\mathrm{mis}}^2 and \hat{\sigma}_{\beta,\mathrm{imp}}^2—are estimated using a clustered variance structure with sampling weights (see Schafer and Schenker 1997, Sect. 5.4, for details on the extension to weighted estimates with clustering).
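The arithmetic of the correction is simple once the three variance terms are in hand. The function below is a direct transcription of (6) as reconstructed above; the numerical inputs are hypothetical, and the clustered, weighted estimation of the variance terms themselves is not shown.

```python
# Corrected standard error for one coefficient, following the reconstructed Eq. (6).
import math

def corrected_se(var_full, var_mis, var_imp, r):
    """var_full: variance of beta estimated the standard way;
    var_mis: variance contribution of the imputed data;
    var_imp: residual variance of the imputation model;
    r: share of missing (imputed) values for this variable."""
    corr_var = (var_full
                + 2 * r * var_full * (var_mis / var_full)
                + var_full * (var_imp / var_full))
    return math.sqrt(corr_var)

# Hypothetical numbers: the standard error rises from 0.10 to about 0.12
# when 30% of the values of the variable are imputed.
print(corrected_se(var_full=0.010, var_mis=0.006, var_imp=0.002, r=0.3))
```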
References

Adams R, Wu M (eds) (2002) PISA 2000 technical report. Organisation for Economic Co-operation and Development (OECD), Paris
Akerhielm K (1995) Does class size matter? Econ Educ Rev 14:229–241
Betts JR (1998) The impact of educational standards on the level and distribution of earnings. Am Econ Rev 88:266–275
Bishop JH (1997) The effect of national standards and curriculum-based exams on achievement. Am Econ Rev 87:260–264
Bishop JH (2006) Drinking from the fountain of knowledge: student incentive to study and learn. In: Hanushek EA, Welch F (eds) Handbook of the economics of education. North-Holland, Amsterdam (forthcoming)
Bishop JH, Wößmann L (2004) Institutional effects in a simple model of educational production. Educ Econ 12:17–38
Costrell RM (1994) A simple model of educational standards. Am Econ Rev 84:956–971
Dronkers J, Robert P (2003) The effectiveness of public and private schools from a comparative perspective. EUI Working Paper SPS 2003–13. European University Institute, Florence
DuMouchel WH, Duncan GJ (1983) Using sample survey weights in multiple regression analyses of stratified samples. J Am Statist Assoc 78:535–543
Epple D, Romano RE (1998) Competition between private and public schools, vouchers, and peer-group effects. Am Econ Rev 88:33–62
Fertig M (2003a) Who’s to blame? The determinants of German students’ achievement in the PISA 2000 study. IZA Discussion Paper 739. Institute for the Study of Labor, Bonn
Fertig M (2003b) Educational production, endogenous peer group formation and class composition: evidence from the PISA 2000 study. IZA Discussion Paper 714. Institute for the Study of Labor, Bonn
Fertig M, Schmidt CM (2002) The role of background factors for reading literacy: straight national scores in the PISA 2000 study. IZA Discussion Paper 545. Institute for the Study of Labor, Bonn
Fuchs T, Wößmann L (2004a) What accounts for international differences in student performance? A re-examination using PISA data. CESifo Working Paper 1235. CESifo, Munich
Fuchs T, Wößmann L (2004b) Computers and student learning: bivariate and multivariate evidence on the availability and use of computers at home and at school. Brussels Econ Rev 47:359–385
Gundlach E, Wößmann L, Gmelin J (2001) The decline of schooling productivity in OECD countries. Econ J 111:C135–C147
Hanushek EA (2002) Publicly provided education. In: Auerbach AJ, Feldstein M (eds) Handbook of public economics, vol 4. North-Holland, Amsterdam, pp 2045–2141
Hanushek EA et al (1994) Making schools work: improving performance and controlling costs. Brookings Institution Press, Washington
Hoxby CM (1999) The productivity of schools and other local public goods producers. J Public Econ 74:1–30
Hoxby CM (2001) All school finance equalizations are not created equal. Q J Econ 116:1189–1231
Jürges H, Schneider K, Büchel F (2005) The effect of central exit examinations on student achievement: quasi-experimental evidence from TIMSS Germany. J Eur Econ Assoc 3:1134–1155
Little RJA, Rubin DB (1987) Statistical analysis with missing data. Wiley, New York
Moulton BR (1986) Random group effects and the precision of regression estimates. J Econ 32:385–397
Nechyba TJ (2000) Mobility, targeting, and private-school vouchers. Am Econ Rev 90:130–146
Nechyba TJ (2003) Centralization, fiscal federalism, and private school attendance. Int Econ Rev 44:179–204
Organisation for Economic Co-operation and Development (OECD) (2000) Measuring student knowledge and skills: the PISA 2000 assessment of reading, mathematical and scientific literacy. OECD, Paris
Organisation for Economic Co-operation and Development (OECD) (2001) Knowledge and skills for life: first results from the OECD Programme for International Student Assessment (PISA) 2000. OECD, Paris
Organisation for Economic Co-operation and Development (OECD) (2002) Manual for the PISA 2000 database. OECD, Paris
Organisation for Economic Co-operation and Development (OECD) (2003) Education at a glance: OECD indicators 2003. OECD, Paris
Rubin DB (1987) Multiple imputation for nonresponse in surveys. Wiley, New York
Schafer JL, Schenker N (1997) Inference with imputed conditional means. Pennsylvania State University, Department of Statistics, Technical Report #97–05 (available at http://www.stat.psu.edu/reports/1997/tr9705.pdf)
Schafer JL, Schenker N (2000) Inference with imputed conditional means. J Am Statist Assoc 95:144–154
Shleifer A (1998) State versus private ownership. J Econ Perspect 12:133–150
Todd PE, Wolpin KI (2003) On the specification and estimation of the production function for cognitive achievement. Econ J 113:F3–F33
West MR, Wößmann L (2006) Which school systems sort weaker students into smaller classes? International evidence. Eur J Polit Econ (forthcoming) (available as CESifo Working Paper 1054, CESifo, Munich)
Wolter SC, Coradi Vellacott M (2003) Sibling rivalry for parental resources: a problem for equity in education? A six-country comparison with PISA data. Swiss J Sociol 29:377–398
Wooldridge JM (2001) Asymptotic properties of weighted m-estimators for standard stratified samples. Econ Theory 17:451–470
World Bank (2003) World development indicators CD-Rom. World Bank, Washington
Wößmann L (2003a) Schooling resources, educational institutions and student performance: the international evidence. Oxford Bull Econ Statist 65:117–170
Wößmann L (2003b) Central exit exams and student achievement: international evidence. In: Peterson PE, West MR (eds) No child left behind? The politics and practice of school accountability. Brookings Institution Press, Washington, pp 292–323
Wößmann L (2003c) Central exams as the “currency” of school systems: international evidence on the complementarity of school autonomy and central exams. DICE Report – J Inst Comp 1:46–56
Wößmann L (2005) Educational production in Europe. Econ Policy 20:445–504
Wößmann L, West MR (2006) Class-size effects in school systems around the world: evidence from between-grade variation in TIMSS. Eur Econ Rev 50:695–736
PISA: What makes the difference? Explaining the gap in test scores between Finland and Germany

Andreas Ammermueller
Received: 15 December 2004 / Accepted: 15 May 2006 / Published online: 25 September 2006 © Springer-Verlag 2006
Abstract The large difference in the level and variance of student performance in the 2000 PISA study between Finland and Germany motivates this paper. It analyses why Finnish students showed a significantly higher performance by estimating educational production functions for both countries, using a unique micro-level dataset with imputed data and added school type information. The difference in reading proficiency scores is decomposed into different effects, using Oaxaca–Blinder and Juhn–Murphy–Pierce decomposition methods. The analysis shows that German students and schools have on average more favorable characteristics except for the lowest deciles, but experience much lower returns to these characteristics in terms of test scores than Finnish students. The role of school types remains ambiguous. Overall, the observable characteristics explain more of the variation in test scores in Germany than in Finland.

Keywords Educational production · PISA · Student performance · Decomposition

JEL Classification I21 · H52

1 Introduction

The publication of the PISA (Programme for International Student Assessment) outcomes led to a public outcry in Germany and to envious gazes towards Finland. While Finland achieved the top rank in reading proficiency and the third and fourth place in mathematics and science, respectively, Germany was
A. Ammermueller (B) Centre for European Economic Research (ZEW), P.O. Box 103443, 68034 Mannheim, Germany e-mail:
[email protected]
placed well below the OECD average in all three test subjects. Other European countries like Italy, Spain and Switzerland also showed a low performance in some of the tested subjects (OECD 2001).

An intense political debate began in response to the negative assessment of German students’ performance. High performing countries in PISA were chosen as role models for improving the schooling system. The favorite role model in Europe is Finland, due to its high average test scores and their small spread. Especially in Germany, Finland was the country most referred to as an example of an efficient and equitable education system.

Before reforming the schooling systems, the reasons for the different performance of countries in PISA need to be analyzed thoroughly. Differences in PISA performance concern the level of the average test scores as well as the dispersion of the score distribution. The question arises whether student and school characteristics are more favorable in the high performing country or if the returns to these characteristics in terms of test scores are more advantageous. For example, the poor German performance could be due to a higher share of students from a lower social background in Germany than in Finland. However, if the assumed negative effect of a poorer social background were smaller in Germany, the overall impact on the average test score could still be comparable to the impact in Finland. The resources and institutional setting of schools might explain the difference in test scores as well. Therefore, the paper examines the differences between the test score distributions in Finland and Germany and decomposes them in order to quantify the different effects. The analysis aims at disclosing possible sources of the mediocre performance in Germany and thereby gives guidelines where improvements of the schooling system are feasible. Finally, these countries allow for comparing the results of a streamed schooling system (Germany) to a single-type schooling system (Finland).

Although there is an extensive body of literature on student performance in Germany, few studies consider the aspects of educational production or provide an in-depth country comparison. Related studies mainly focus on the bivariate correlation between inputs and test scores or use students’ basic cognitive abilities to explain test scores (Baumert et al. 2001; Artelt et al. 2002). Another study that employs multivariate methods uses unprocessed data that ignore the problem of missing values and include no information on school types (Fertig 2003). Dropping students with missing information for some variables is likely to lead to a sample selection bias and neglects the use of the entire set of information that is available for the analysis. Ignoring school types makes an analysis of the diversified German schooling system almost futile. The multivariate analysis conducted in this paper uses a unique dataset with imputed data for missing values and school type information from the PISA extension study for Germany.

The main results are that the measurable characteristics of the German students and schools cannot explain the test score gap, except for the lowest deciles of the student population. Instead, the use of resources and the transformation of student and school characteristics are more efficient in Finland. Test scores
of students depend to a higher degree on student background in Germany than in Finland, which leads to the higher inequality in test scores in Germany. The results might imply that streaming in Germany penalizes students in lower school types and leads to a greater inequality of educational achievement. Overall, the observable characteristics explain more of the variation in test scores in Germany than in Finland.

The remainder of the paper is structured as follows. The second section introduces the PISA study and describes the data for the two countries of interest. The third section discusses the determinants of educational performance. In the fourth section, the Oaxaca–Blinder and Juhn–Murphy–Pierce decompositions are performed. Finally, the fifth section concludes with a summary of the findings and their political implications.

2 The PISA data

The Programme for International Student Assessment (PISA) tested 15-year-old students in the subjects mathematics, science and reading proficiency in the first half of 2000. The goal was not only to test student knowledge but rather their understanding of the subject matter and their ability to apply the acquired knowledge to different situations. The testing was conducted by the OECD throughout its 28 member countries plus Brazil, Latvia, Liechtenstein and Russia. Apart from test scores, data from student and school questionnaires were collected. These include information on student background, availability and use of resources as well as the institutional setting at schools (Adams and Wu 2002). For Germany, additional student-level information on school types is taken from an extended version of the PISA study.1 The two data sources were merged on the student level before the information was extracted. For a detailed description of the German PISA study see Baumert et al. (2001) and for an analysis of the Finnish results see Välijärvi et al. (2002).

1 In Germany, an extended version of the PISA study was conducted on behalf of the states’ education ministers. However, the so-called PISA-E data is not well-suited for a comparison to the Finnish data due to the huge difference in sample size and missing information in the publicly available data-files.

Student test scores are computed according to item response theory (cf. Hambleton and Swaminathan 1989). They are the weighted averages of the correct responses to all questions belonging to a certain category. The difficulty assigned to a question is its weight. The scores have been standardized with an international mean of 500 and a standard deviation of 100. These weighted likelihood test scores estimate an individual’s proficiency in the respective subject. The values given for the population parameters might slightly differ from other publications (e.g. OECD 2001), which use plausible values as test scores that are drawn from an estimated ability distribution and provide better estimates at the population level. The weighted means and standard deviations of the test scores and background variables are presented in Table A1 in the Appendix. Table A2 displays statistics of selected variables separately for the
different school types in Germany. The characteristics of students and schools vary greatly between school types. It is thus necessary to take school types into account in an analysis of the German schooling system. The standard deviations show that the variation within school types is also high, except for the school being public or not.

In Finland, 4,855 students in 154 schools participated in PISA 2000 and completed a reading proficiency and mathematics or science test. In Germany, 4,917 students in 219 schools participated. Together with the background information that is provided, the PISA data are the most recent and detailed data on student performance for the two countries and are internationally comparable. The data are clustered due to the sampling design of the study. The schools that participated were chosen first, before a random sample of the student target population was drawn. Therefore, the schools and not the students are the primary sampling units.

Missing values for student and school background variables are the main problem of the data. For Germany and Finland, up to 16% of key variables such as parents’ education are missing. Table A3 presents the percentage of missing values. Commonly, the whole observation (student) is dropped from the regression if the value of any explanatory variable is missing. This leads to a great reduction in the number of observations that can be used for the estimations, around 43% in Finland and 60% in Germany. Apart from losing valuable information, dropping students with incomplete answers to the questionnaires leads to a sample selection bias if the values are not missing randomly. Indeed, given that attentive students are more likely both to complete the questionnaires and to answer the test questions, low performing students have a higher probability of being dropped. Thus, dropping observations with missing values would lead to an upward bias in the test scores, which can be seen in Table A1.

The approach chosen here to overcome the problem of missing data is to predict missing values on the basis of regressions on those background variables like age, sex and the grade a student is in that are available for most students. Linear models are used for continuous variables and probit and ordered probit models for qualitative variables. Students who did not answer these elementary background questions or did not complete the tests are excluded from the regressions, as well as students with more than ten missing values.2 This applies to less than one percent of the sample but leads to a significant increase in mean test scores and a lower standard deviation in Germany. The descriptive statistics and the regression results are also given for the original data without imputed values in Tables A1 and A2, respectively, where all students with at least one missing value are dropped.
2 Moreover, students with an unrealistically low score of below 200 points (26 students), students
from one school in Finland with identical test scores (5 students) and students not in grade 8, 9 or 10 (65 students) were dropped from the regressions. The exclusion of outliers is necessary so that the analysis is not dominated by a small and unrepresentative subsample of the student population.
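As a rough illustration of how sampling-weighted descriptive statistics of this kind (and the shares of missing values per variable) can be computed, the following sketch uses a pandas DataFrame with a weight column; the data and column names are invented placeholders, not the actual PISA files.

```python
# Sketch: sampling-weighted means and SDs plus the share of missing values per variable.
import numpy as np
import pandas as pd

def weighted_stats(df, cols, weight="w"):
    """Weighted mean and SD per variable, computed on non-missing values only."""
    rows = {}
    for c in cols:
        ok = df[c].notna()
        x, w = df.loc[ok, c], df.loc[ok, weight]
        mean = np.average(x, weights=w)
        sd = float(np.sqrt(np.average((x - mean) ** 2, weights=w)))
        rows[c] = {"mean": mean, "sd": sd, "missing_share": 1 - ok.mean()}
    return pd.DataFrame(rows).T

# Placeholder data standing in for one country's student file.
rng = np.random.default_rng(5)
df = pd.DataFrame({"read_score": rng.normal(500, 100, 1000),
                   "books_home": rng.integers(0, 6, 1000).astype(float),
                   "w": rng.uniform(0.5, 1.5, 1000)})
df.loc[rng.random(1000) < 0.16, "books_home"] = np.nan   # roughly 16% missing
print(weighted_stats(df, ["read_score", "books_home"]))
```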
Fig. 1 Test score distributions. Kernel density estimates of math, science and reading scores for Finland (FIN) and Germany (GER)
The prediction of missing values on the basis of regression results is clearly an imperfect solution. The variation of the variables decreases, as can be seen in the lower standard deviations of the variables including the imputed values as compared to the original data. However, the imputed values vary greatly as well, and the information in the non-imputed values of each observation is not lost.

2.1 Distribution of test scores

In this part, the distributions of test scores for Finland and Germany are presented graphically. For each subject, non-parametric kernel density estimates describe the score distribution of the two countries.3

3 The bandwidth was chosen using Silverman’s rule of thumb (cf. Silverman 1986).
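A minimal sketch of such kernel density estimates is given below, using a Gaussian kernel with Silverman's rule-of-thumb bandwidth and sampling weights; the simulated scores and weights are placeholders, and the estimator may differ in detail from the one behind the published figure.

```python
# Sketch of weighted kernel density estimates of the type shown in Fig. 1.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Placeholder data standing in for the PISA reading scores and sampling weights.
samples = {"FIN": rng.normal(545, 85, 4000), "GER": rng.normal(490, 110, 4000)}
weights = {k: rng.uniform(0.5, 1.5, len(v)) for k, v in samples.items()}

grid = np.linspace(0, 1000, 500)
for country, scores in samples.items():
    # Gaussian kernel, Silverman's rule-of-thumb bandwidth, sampling-weighted.
    kde = gaussian_kde(scores, bw_method="silverman", weights=weights[country])
    plt.plot(grid, kde(grid), label=country)
plt.xlabel("Reading score")
plt.ylabel("Density")
plt.legend()
plt.show()
```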
Figure 1 displays the test score distributions for the three subjects for both Finland (FIN) and Germany (GER). The Finnish scores are on average higher than the German scores, which can be seen in the rightward shift of the Finnish distribution and the higher weighted average score. The peak of the Finnish distribution is to the right of the German distribution, which reflects the higher mode of the kernel density estimates. Moreover, Finland has not only a higher share of good students but especially fewer low and very low performing students than Germany, which has a relatively fat left tail. Despite the higher average scores, Finland has a lower standard deviation of scores. This pattern holds for all three subjects in which the students have been tested. Finland thus exhibits more desirable characteristics in its test score distributions, namely higher average scores, a higher mode and a smaller spread of test scores in all subjects. The question arises whether the reasons for this great difference in performance between Finnish and German students can be identified.

The subsequent analysis focuses on the reading proficiency of students because the respective test scores are available for all students. The distributions of the other test scores suggest no important difference between the three subjects. The total reading score gap between Finland and Germany is shown in Fig. 2. Students’ sampling probabilities are taken into account when dividing the student distribution into deciles. The gap averages 54 points but is declining strongly along the deciles of the distribution. While it is 75 points for the lowest performing decile of students, it decreases to 30 points for the highest performing 10% of students. The difference in test scores is significant at each decile. In particular, the relatively low performance of the lower part of the German student distribution seems to be responsible for the low average score compared to Finland. However, even the highest scoring German students do not attain the same level as the equivalent Finnish students. The inequality in the test score distribution is hence much higher in Germany than in Finland. In the next step, the factors that affect the test scores in either country are analyzed.
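One way to approximate the decile-wise comparison in Fig. 2 is to evaluate sampling-weighted quantiles of the two national score distributions and take the difference, as in the sketch below; the input data are placeholders, and the published figure may instead compare mean scores within decile groups.

```python
# Sketch of a decile-wise Finland-minus-Germany reading score gap (cf. Fig. 2).
import numpy as np

def weighted_quantile(x, w, q):
    """Quantiles of x under sampling weights w (interpolation on the weighted CDF)."""
    order = np.argsort(x)
    x, w = np.asarray(x)[order], np.asarray(w)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q, cdf, x)

def decile_gap(scores_fin, w_fin, scores_ger, w_ger):
    """Gap evaluated at the mid-points of the ten decile groups."""
    mids = np.arange(0.05, 1.0, 0.10)
    return (weighted_quantile(scores_fin, w_fin, mids)
            - weighted_quantile(scores_ger, w_ger, mids))

# Placeholder data; the real computation would use the PISA scores and weights.
rng = np.random.default_rng(1)
gap = decile_gap(rng.normal(545, 85, 4000), rng.uniform(0.5, 1.5, 4000),
                 rng.normal(490, 110, 4000), rng.uniform(0.5, 1.5, 4000))
print(np.round(gap, 1))   # with these placeholders: larger gaps at the bottom, smaller at the top
```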
3 Determinants of reading proficiency scores

3.1 The production function approach

A thorough comparison of student performance in the schooling systems of the two countries presupposes knowledge of the process by which education is produced. Educational production functions provide a means for understanding the production process by estimating the effects that various inputs have on student achievement. For the production function to yield unbiased estimates of the effects, all current and prior inputs into the education system that are likely to determine educational performance should be included in the production function. The cross-sectional PISA data give information on the background of each student, the current resources available to schools including teacher characteristics, as well as the institutional setting at the school level.
Fig. 2 Total reading score gap, by decile of the student score distribution
However, no information on prior achievement of students or on inputs into educational production at another point in time is available. Therefore, the estimation of educational production functions like the following is limited by missing information on prior inputs (Todd and Wolpin 2003):

T_{is} = \beta_0 + G_{is}\beta_1 + B_{is}\beta_2 + R_s\beta_3 + I_s\beta_4 + S_s\beta_5 + \nu_s + \varepsilon_{is}
(1)
where T_{is} is the test score of student i in school s, G_{is} comprises grade-level dummies and the student’s age in months, B_{is} is a set of student background variables, R_s is a set of variables on school resources, I_s represents institutional variables, S_s school type variables, and \nu_s and \varepsilon_{is} are the error terms at the school and student level. The groups of parameters \beta_0 to \beta_5 are to be estimated.

The dummies for grades eight and ten estimate the advances that have been made by students in reading literacy compared to grade nine, which is the reference group. In Finland, students are either in grade eight or nine, while in Germany some students are also in grade ten. The continuous variable age is measured in months and mostly captures the effect of grade repetition. German students are on average slightly older and in a higher grade than Finnish students.

Besides innate ability, which cannot be measured, student background has been shown to be the most decisive factor in explaining student performance (cf. Hanushek and Luque 2003; Woessmann 2003). The background B_{is} includes personal characteristics like sex and information on parents’ origin, education and the number of books at home. These variables are unlikely to change over time and may serve as a good proxy for prior inputs. Their effect on the cognitive achievement of students can therefore be interpreted as a causal relationship. However, the total effect of student background on student performance is underestimated by \beta_2 if there is an indirect effect via the school type S_s. This is the case when the allocation of students to schools does not depend only on innate ability but also on parental background. Therefore, the coefficients of
student background should better be interpreted as a lower bound of the effects, especially for Germany with its many schooling types.

The current resources R_s describe parts of the schooling system that depend mainly on financial investments from the public side. The teacher/student ratio at the school level is used to measure the input of teachers for each student. Actual class size would have the advantage of measuring this input more directly, but the estimate of class size is likely to be biased: for the class size estimate, selection of students within schools adds to the problem of selection between schools (cf. West and Woessmann 2006). Indeed, low performing students might be put in smaller classes in order to foster their learning. Under the assumption that students do not switch schools and that the teacher/student ratio is roughly constant over time, the current ratio is a reasonably good proxy for teacher input per student over the last years. Further resource variables at the school level are the yearly amount of instruction time, the percentage of teachers with completed university education and a dummy variable indicating whether principals state that their school lacks material.

The institutional setting is the framework within which the different players involved in schools act. It may affect the motivation and incentives, especially of students and teachers (cf. Bishop and Woessmann 2004). The variables describe whether the school is private or public, whether standardized tests are used more often than once a year to assess student performance, whether schools are allowed to select their students, and the school’s autonomy in formulating the budget and allocating financial resources. This information is obtained from the school questionnaire completed by the head of school. As institutional reforms take a long time to be implemented, the current institutional setting should accurately describe the setting over the last years, assuming that students stay in the same school. In Finland, students usually stay in the same comprehensive school over the entire period of compulsory education of 9 years, while German students commonly change school after 4 years of elementary school.4 Given that the tested students have already spent 4–5 years at their secondary school, the effect of the former elementary school should be negligible. Under the mentioned assumptions, the coefficients for resources and the institutional setting of the school can be interpreted as causal effects, especially since we control for school types.

Finally, the school type variables indicate the type of school a student attends. This can be five types in the case of Germany and only one type in Finland. German students are allocated to secondary school types after their fourth school year according to their performance in elementary school.5 Assuming that innate ability of students in the fourth and ninth school year is not independent, there is a problem of endogeneity between school type and student performance because both are determined by innate ability.

4 Information on the educational systems is taken from Eurybase (2003).

5 Teachers at elementary school write recommendations for each student, and then parents have to
apply at schools. Only the degree from the higher secondary school (Gymnasium) allows attending university. The vocational school (Berufsschule) is for students in an apprenticeship.
Moreover, because educational performance in elementary school, the preference for school types and thus the allocation to a school type are also determined by student background, the school type coefficient might include a part of the student background effect on student achievement. Hence, the school type coefficient consists of the ‘true’ school type effect, an effect of sorting by innate ability and an additional impact of student background on student performance via school type. The coefficient \beta_5 can therefore only be interpreted as a partial correlation.

Note that the specification of the production function is chosen in such a way that all coefficients of the same category of variables are expected to have the same sign. Student background variables are expected to have negative coefficients, while resource and institutional variables are expected to have positive coefficients. This is important for the identification of the effects in the decomposition in the following sections. Only for the grade level is grade nine, rather than grade ten, chosen as the reference category, because there are no students in grade ten in Finland.

3.2 The estimated effects

The effect of the characteristics on student performance is estimated by regressing the individual student test score on the explanatory variables, separately for Finland and Germany (see Eq. (1)).6 Due to the clustered design of the PISA data, survey regressions are used for the estimation. These correct the standard errors for the clustered sampling design, which implies an interdependence of error terms between students within the same school. As students from different schools have different sampling probabilities, the sampling weights available in the data are used for the estimation of Eq. (1). The outcomes of the weighted survey regressions with the dependent variable reading proficiency score are presented in Table A3.

6 Characteristics here comprise all measurable characteristics, including the grade level, student background, resources, institutional setting and school types.
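As an illustration of the weighted, cluster-robust estimation just described, the sketch below runs a weighted least-squares regression with school-clustered standard errors in statsmodels; the simulated data and variable names are invented for the example, and PISA's official replicate-weight procedures are more involved than this simple cluster correction.

```python
# Sketch of a weighted regression of test scores on explanatory variables
# with standard errors clustered at the school level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder student-level data; in the application this is the national PISA sample.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "read_score": rng.normal(500, 100, n),
    "female": rng.integers(0, 2, n),
    "books_home": rng.integers(0, 6, n),
    "teacher_student_ratio": rng.uniform(0.05, 0.15, n),
    "private": rng.integers(0, 2, n),
    "w": rng.uniform(0.5, 1.5, n),           # sampling weights
    "school_id": rng.integers(0, 50, n),     # cluster identifier
})

# WLS with sampling weights; cluster-robust covariance by school.
model = smf.wls("read_score ~ female + books_home + teacher_student_ratio + private",
                data=df, weights=df["w"])
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(result.summary())
```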
Including the imputed values for the estimations does not affect the qualitative interpretation of the results but makes them more representative of the student population. The R2 of the regressions indicates that more than half of the variation in the German test scores can be explained, but only 17% of the Finnish variation. The performance of students in Germany therefore depends more strongly on factors that can be observed, and far less on innate ability and other unobserved factors, than in Finland. Even when the school type variables are not included in the regression for Germany, the R2 of 0.34 is still twice as large as for Finland.

An additional year of schooling adds between 38 points in Germany and 48 points in Finland to the test score, which is almost half a standard deviation. This magnitude makes clear how large the test score gap of 54 points between Germany and Finland actually is. The coefficient for age is negative and much larger in absolute terms in Germany, which might be explained by the higher share of repeat students in Germany as compared to Finland. The student background coefficients are highly significant and have a high impact on student performance.7 For example, students whose parents have not completed secondary education score 35 points lower in Germany, respectively 26 points lower in Finland, compared to students whose parents have completed tertiary education, all else equal. The penalty for an unfavorable student background is higher in Germany than in Finland for this example. Girls perform significantly better than boys, in particular in Finland. Students who were born abroad or whose parents immigrated score lower than comparable non-immigrated students, especially in Finland, where the share of these students is only 3% compared to 20% in Germany. The number of books at home has a highly significant and large effect on performance.

7 This is confirmed by the marginal effects and is also the result of studies using TIMSS data (e.g. Ammermueller et al. 2005).

Due to the large share of migrant students, i.e. students with at least one parent born abroad, in Germany compared to Finland, the question arises whether German students can be treated as a homogeneous group. When the production functions are estimated separately for migrants and non-migrants in Germany, the regression results show that the coefficients differ between the two groups of students at the 5% significance level for only two variables. The variables are “no secondary education” and “gymnasium”, and the effects are larger for migrant students.

The coefficients for resource variables are never significant. The teacher/student ratio and having no lack of material are not significantly related to test scores. A high share of teachers with tertiary education is associated with an insignificant increase in test scores in Germany but not in Finland. The variables describing the institutional setting are not significant, except for the power of schools to select their students in Germany. The variation of the institutional setting within countries is not very large, however, so that inter-country comparisons are more suited for analyzing their effects (cf. Woessmann 2003). School types exhibit highly significant effects in Germany in reference to comprehensive schools. Students who attend a low (high) secondary school score 34 (85) points lower (higher) than comparable students in comprehensive schools in Germany.

After having shown the determinants of student performance in the two countries, the following section compares the results more systematically by decomposing the score gap between Finland and Germany into different effects.

4 Explaining the test score gap

The difference observed between the test score distributions in Finland and Germany may be due to several reasons. First, Finnish students may have a more favorable endowment in characteristics measured by the explanatory variables. Finnish students might for example have better educated parents, more resources at schools and a more favorable institutional setting.
Second, the effects of the different characteristics on the performance of students might differ between the two countries. In other words, the same characteristics might be less efficient in producing reading literacy in Germany than in Finland. Third, a part of the test score gap may be due to the difference in the residuals of the estimated regressions. Any unobserved factors that affect test scores, foremost the innate ability of students and their motivation, constitute the residual effect. As the expected value of the residuals is zero, the residual effect is only important when we consider the test score gap along the score distribution and not at the mean. These three effects, referred to as the characteristics, the return and the residual effect, can be quantified by decomposition methods. Two different methods will be employed: the Oaxaca–Blinder (Sect. 4.1) and the Juhn–Murphy–Pierce (Sect. 4.2) decomposition.

4.1 Oaxaca–Blinder decomposition

This 'classical' decomposition technique has been developed by Blinder (1973) and Oaxaca (1973) and splits a gap into two parts. The first part is explained by differences in characteristics, the second part by differences in returns to those characteristics. However, the technique considers only the average effects, ignoring differences along the distribution like its dispersion and skewness. The latter aspects will be examined in Sect. 4.2. The decomposition method used here differs slightly from the classical Oaxaca–Blinder decomposition and follows Lauer (2000). As the aim of the analysis is to explain the low performance of German relative to Finnish students, the different effects that explain the difference in scores are considered from the point of view of German students. The total score gap between Finland and Germany at the mean is defined as

$T = \bar{T}^F - \bar{T}^G$   (2)

where the bars denote weighted averages and the superscripts 'F' and 'G' the countries Finland and Germany, respectively. The total score gap can then be decomposed into a characteristics, a return and a characteristics-return effect, using the estimates of the weighted survey regressions presented in the previous section:

$T = \sum_{i=1}^{5} \beta_i^G (\bar{X}_i^F - \bar{X}_i^G) + \sum_{i=0}^{5} (\beta_i^F - \beta_i^G)\,\bar{X}_i^G + \sum_{i=1}^{5} (\beta_i^F - \beta_i^G)(\bar{X}_i^F - \bar{X}_i^G)$   (3)
where X comprises the five categories of explanatory variables G, B, R, I and S. The first component on the right-hand side of Eq. (3) is the characteristics effect, which is the sum of the characteristics effects for all five categories of explanatory variables i = 1, .., 5 included in Eq. (1). It measures by how much
German students would score differently if they had the same characteristics as Finnish students, given their estimated returns to characteristics in terms of scores. The second component, the return effect, shows by how much German students would hypothetically perform better if they experienced the same production process of schooling, i.e. the same transformation of inputs into educational achievement, as Finnish students, given their own characteristics. The final characteristics-return effect is an interaction between the impact of a possibly better production process and different characteristics in Finland.

The gap between the weighted average reading scores amounts to 54.29 points, as can be seen in Table 1, which presents the results of the decomposition. The analytical standard errors are computed according to Jann (2005). The difference in mean scores is substantial: it is more than half of the international standard deviation of the PISA scores and 44% higher than the effect of being in the ninth instead of the eighth grade in Germany. The total characteristics effect is significantly negative, implying that German characteristics are actually more favorable than Finnish characteristics. The overall return effect of 58 points is significantly positive. The transformation of given inputs in Finnish schools results in higher student test scores than in Germany, which explains part of the score gap. The interaction effect amounts to 33 points.

A separation of the effects into the five groups of explanatory variables, grade level and age, student background, resources, institutions and school types, shows a more differentiated picture. The share of students in higher grades is larger in Germany than in Finland, leading to the significantly negative characteristics effect of the grade category. While student background hardly differs between the samples of Finnish and German students, the difference in resources explains about four percent of the positive gap in test scores but is not significant. The average teacher/student ratio as well as the percentage of highly educated teachers is higher in Finland than in Germany. In contrast, the negative effects for institutions and school types imply more favorable characteristics for German students. In particular, the share of schools that are allowed to select their students is higher in Germany than in Finland.
Table 1 Results of Oaxaca–Blinder decomposition of gap in reading test scores

                        Sum             Grade          Student backgr.   Resources       Institutions    Schools          Cons
Total gap               54.29 (5.92)
Characteristic effect  −36.39 (8.42)   −7.54 (1.26)    −1.32 (1.44)      1.94 (3.40)    −9.24 (4.04)    −20.23 (5.81)
Return effect           57.81 (6.49)    1.06 (4.07)   −28.05 (6.24)     10.80 (29.90)   −7.76 (7.43)    −20.23 (5.81)    101.99 (31.89)
Interaction effect      32.87 (9.03)   10.25 (1.09)     4.73 (1.91)     −5.84 (5.47)     3.50 (4.96)     20.23 (5.81)

Standard errors in parentheses
The highly positive return effect is mainly driven by the difference in the intercepts and by the resource variables. Resources are transformed more efficiently into student test scores in Finland than in Germany. However, as the return effect for the resource variables is not significant, the results have to be interpreted cautiously. The transformation of personal and family characteristics is more beneficial for German than for Finnish students, which more than offsets the positive effect for resources. However, this is due to the school type dummies that are included in the regression. The findings of a decomposition using only student background variables as explanatory variables are presented in Sect. 4.2.2. The highly positive difference of the intercepts between Finland and Germany depends on the choice of the reference categories in general and in particular on the choice of the reference school type. When the comprehensive Finnish schools are compared to a higher scoring German school type instead of the German comprehensive schools, the difference between the intercepts decreases. The interaction effect shows that the interaction between better characteristics and a better production process benefits Finnish students relative to German students for all categories of variables except for resources.

Table 1 considers all coefficients for the decomposition, even when the difference between coefficients in the two countries is not statistically significant. Table A4 presents the decomposition results when those coefficients that do not differ at the ten percent significance level are restricted to be equal.8 Accordingly, only effects that significantly differ between countries are taken into account. These variables are: student's age, sex, parents' origin, parents' higher secondary education and all school types. For this adapted analysis, the effects in Table A4 differ greatly only for the resource and institutional variables, for which no coefficients differ significantly between the two countries. The sum of the effects hardly changes.

When the average of the distribution is considered, the difference in characteristics cannot explain the better performance of Finnish students. According to this decomposition, the poor transformation of the available resources and the difference in the unobservables captured by the intercepts constitute the main reasons for the relatively low scores of German students.
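A minimal sketch of the three-fold decomposition in Eq. (3) is given below. It assumes that country-specific coefficients and weighted regressor means are already available as pandas Series sharing the same index, it does not reproduce the analytical standard errors of Jann (2005), and all names are illustrative rather than the author's code.

```python
# Sketch of the Oaxaca-Blinder decomposition in Eq. (3). beta_f, beta_g, xbar_f
# and xbar_g are pandas Series indexed by regressor name (with 'Intercept'
# included and xbar['Intercept'] = 1); groups maps each regressor to one of the
# five categories (grade, background, resources, institutions, school types).
import pandas as pd

def oaxaca_blinder(beta_f, beta_g, xbar_f, xbar_g, groups):
    dx = xbar_f - xbar_g                 # difference in weighted means (zero for the intercept)
    db = beta_f - beta_g                 # difference in estimated returns
    effects = pd.DataFrame({
        "characteristics": beta_g * dx,  # first term of Eq. (3)
        "return": db * xbar_g,           # second term (intercept difference enters here)
        "interaction": db * dx,          # third term
    })
    effects["category"] = [groups.get(v, "Cons") for v in effects.index]
    by_category = effects.groupby("category").sum(numeric_only=True)
    total_gap = effects[["characteristics", "return", "interaction"]].to_numpy().sum()
    return by_category, total_gap
```

Summing the three effect columns over all categories reproduces the total gap of Eq. (2), which is how the row and column entries of Table 1 fit together.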
4.2 Juhn–Murphy–Pierce decomposition

Until now, only the mean of the distribution has been considered. However, as shown in Sect. 2, the distribution of scores differs between the two countries. Therefore, the decomposition will be performed along the entire score distribution as well.

8 The effects for the two countries have been estimated simultaneously using interaction terms to see if the coefficients for the countries differ significantly. The interaction terms that are not significant have then been dropped. Reducing the significance level to five percent leads to a further reduction of coefficients.
The following decomposition technique was first employed by Juhn et al. (1993) for a decomposition of change across time. It is also applicable to cross-section data (e.g. Blau and Kahn 1992), like the PISA data. The method has the distinct advantage of considering not only the mean for the decomposition but the whole distribution. Moreover, it deals explicitly with the residuals from the estimation of the production function, which are equal to zero at the mean but not at specific quantiles. Following a slightly different approach, it allows one to decompose the score gap into a characteristics, return, characteristics–return and residual effect.

The residual $\varepsilon_i^y$ of country $y$ can be thought of as consisting of two components: the percentile of individual $i$ in the residual distribution, $\theta_i^y$, and the distribution function of the residuals, $F^y$. The inverse cumulative residual distribution function then gives us

$\varepsilon_i^y = F^{y,-1}(\theta_i^y \mid X_i^y)$,   (4)

where $X$ comprises the five sets of explanatory variables G, B, R, I and S of country $y$. Using the estimates from the weighted survey regressions of Eq. (1), the actual and two hypothetical test score distributions for German students can be constructed:

$GER_i = \beta^G X_i^G + F^{G,-1}(\theta_i^G \mid X_i^G)$   (5)

$GER(1)_i = \beta^F X_i^G + F^{F,-1}(\theta_i^G \mid X_i^G)$   (6)

$GER(2)_i = \beta^G X_i^G + F^{F,-1}(\theta_i^G \mid X_i^G)$   (7)

The first hypothetical distribution GER(1) shows the scores that German students would attain if they experienced the Finnish production process and the corresponding residuals from the Finnish residual distribution. Equation (7) presents the second hypothetical distribution GER(2), which assumes that the characteristics of German students are transformed into test scores by the German returns, but that the residual distribution is the same as for Finnish students. The two hypothetical Finnish distributions are created likewise. Following the decomposition described in Eq. (3), the characteristics effect is the difference between the test score distributions for FIN(1) and GER. The return effect equals the difference between the two hypothetical test score distributions GER(1) and GER(2). The third effect is due to the different distribution of residuals in the two countries and can be calculated by subtracting GER from GER(2). The interaction effect can then be constructed as (FIN − FIN(1)) − (GER(1) − GER). Adding up all four effects leads to the total gap (FIN − GER) that shall be explained here. The resulting score distributions are discussed in the following section.
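The construction of the hypothetical German distributions in Eqs. (5)–(7) can be sketched as follows. For simplicity the residual percentiles are computed unconditionally and without survey weights, whereas the paper conditions on X, so this is only an approximation with illustrative variable names rather than the author's implementation.

```python
# Sketch of the counterfactual score distributions GER, GER(1) and GER(2).
# Xg: design matrix for German students; beta_g, beta_f: country-specific
# coefficient vectors; res_g, res_f: estimated regression residuals.
import numpy as np
from scipy import stats

def jmp_counterfactuals(Xg, beta_g, beta_f, res_g, res_f):
    # theta: percentile of each German student in the German residual distribution
    theta_g = stats.rankdata(res_g) / (len(res_g) + 1)
    # map those percentiles through the inverse Finnish residual distribution
    res_from_fin = np.quantile(res_f, theta_g)
    ger = Xg @ beta_g + res_g              # Eq. (5): actual German scores
    ger1 = Xg @ beta_f + res_from_fin      # Eq. (6): Finnish returns and residuals
    ger2 = Xg @ beta_g + res_from_fin      # Eq. (7): German returns, Finnish residuals
    return ger, ger1, ger2
```

The two hypothetical Finnish distributions would be obtained symmetrically by swapping the roles of the two countries.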
4.2.1 The hypothetical score distributions

In order to show the different effects graphically, Fig. A1 in the Appendix displays the real and hypothetical test score distributions. The difference between each pair of reading score distributions, estimated by kernel density functions, is due to only one of the following effects.

The first effect is the characteristics effect. The upper left-hand graph of Fig. A1 shows the hypothetical Finnish distribution FIN(1) and the actual German distribution GER. The former displays how the distribution would look if Finnish students with their own characteristics experienced the German returns to these characteristics and the German residuals, given their position in the Finnish residual distribution. The difference that remains between the two distributions is only due to differences in characteristics between the two countries, given the German educational production process. The mode of the hypothetical Finnish distribution is positioned to the left of the German distribution, which has a higher spread and is slightly skewed to the left. This implies that most German students actually have the more favorable (according to the estimation results) and more heterogeneous characteristics of the two student samples. The characteristics effect thus implies higher average test scores for German than for Finnish students. However, in the lower part of the distribution, the size of the effect decreases and implies higher scores for Finnish than for German students. This is consistent with the slope of the total score gap over the distribution but contradicts the positive sign of the Finnish–German score gap.

The return effect is shown in the upper right-hand graph. The hypothetical distribution GER(1) shows the predicted scores for German students who experience the Finnish production process, including Finnish residuals. Distribution GER(2) displays how German students in German schools would perform if they had Finnish residuals. The difference between the distributions is only due to differences in the production of education in the two countries, given the German characteristics. The production process in Finnish schools clearly leads to a better performance of students, especially in the lower part of the distribution. The return effect can hence explain why German students perform worse than Finnish students.

The residual effect is depicted in the lower left-hand graph of Fig. A1, where the distributions GER(2) and GER are compared. The Finnish residuals in GER(2) lead to a wider distribution than the German residuals, which are quite dense. This is consistent with the earlier results on the determinants of test scores, which showed that observable factors can explain a higher share of the variation in test scores in Germany than in Finland. Consequently, unobservable factors like the innate ability of students have a greater effect in Finland, which is what the residual effect captures.

The last effect in Fig. A1 is the interaction effect, showing the interaction between the possibly better production process and characteristics of Finnish students and schools. The effect is positive. The hypothetical distributions showed the predicted test scores for German and Finnish students had they experienced another educational system. In the following section, a closer look is paid to the different effects and the contribution of the different groups of variables to the effects.
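The density comparisons in Fig. A1 can be reproduced in outline with a Gaussian kernel density estimator. The sketch below uses Silverman's rule-of-thumb bandwidth (cf. Silverman 1986) and the score arrays constructed above; it is an illustrative sketch, not the author's plotting code.

```python
# Sketch of one panel of Fig. A1: compare two (actual or hypothetical) reading
# score distributions with Gaussian kernel density estimates.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def plot_panel(scores_a, scores_b, label_a, label_b, title):
    grid = np.linspace(200, 1000, 400)                      # range of PISA reading scores
    for scores, label in ((scores_a, label_a), (scores_b, label_b)):
        kde = gaussian_kde(scores, bw_method="silverman")   # rule-of-thumb bandwidth
        plt.plot(grid, kde(grid), label=label)
    plt.title(title)
    plt.xlabel("Reading score")
    plt.ylabel("Density")
    plt.legend()
    plt.show()

# e.g. the characteristics-effect panel: hypothetical FIN(1) against actual GER
# plot_panel(fin1, ger, "FIN(1)", "GER", "Characteristics effect")
```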
Fig. 3 Total effects of decomposition (total score gap, characteristics, return, residual and interaction effects in reading score points, by decile of the test score distribution)
4.2.2 The effects and their components

The four effects, which make up the total score gap between the countries, can be broken down further and linked directly to the five groups of variables that determine student performance. First, the course of the aggregated effects is shown over the deciles of the test score distribution. Fig. 3 displays the total score gap and the total effects. The characteristics effect can explain part of the test score gap, but only for the lowest three deciles of the test score distribution, where it is positive. It decreases steadily, implying that the characteristics of German students deteriorate comparatively more when going down the score distribution. This shows a higher inequality in characteristics in Germany than in Finland. The return effect decreases as well over the distribution, but is always positive. The problem of converting the given endowments in Germany into high student performance is thus greatest for the weakest students.

The residual and interaction effects run almost parallel and oppose the other effects. They increase over the whole distribution and are positive from about the fifth and third decile upwards, respectively. The increase in the residual effect is caused by a steeper rise in the Finnish residuals, which are first smaller and then larger than the German residuals. This implies that unobservable factors explain more of the variation in test scores in Finland, which can also be seen in the third part of Fig. A1, since Finnish students at the bottom of the distribution have lower residuals and students at the top have higher residuals than the corresponding German students. The interaction effect increases because the German characteristics deteriorate more when moving down the score distribution. Since most of the estimated coefficients of the production function are negative, the greater inequality shows up as endowments that decline faster along the distribution in Germany than in Finland.
Fig. 4 The components of the characteristics effect (grade, student background, resources, institutions and school types, in reading score points by decile)
Now we turn to the composition of the effects. Figure 4 displays the five components of the characteristics effect. While the positive effect of resources and the negative effect of institutions do not vary greatly over the distribution, the effects of grade level and student background decrease and turn from positive for the lowest three deciles to negative.9 Low performing students have less favorable characteristics in Germany than in Finland, while the opposite is the case for high performing students. The characteristics effect for the variables describing the type of school is also positive for the lower part of the distribution but decreases very strongly, to about −70 points for the highest decile. The streaming of the schooling system is hence associated with a greater inequality between students in Germany than in Finland because it introduces an additional source of test score variation, which is also shown in Table A2. The level of the characteristics effect for school type variables depends on the choice of the reference school type.

Figure 5 shows the return effect separately for each group of variables. The negative effect for student background variables implies that these characteristics are transformed into higher test scores in Germany than in Finland. This is mostly due to the larger negative coefficients for student's sex and parents' origin in Finland. However, it remains unclear how much of the effect of student background on performance is hidden in the school type variables in Germany (see Sect. 3.1). Resources are used more efficiently in Finland than in Germany along the whole score distribution and can hence partly explain the score gap. Institutional variables contribute slightly negatively to the return effect. The return effect for school type variables decreases along the score distribution and is mostly negative. Since Finnish students are all in comprehensive schools, the effect reflects only that students in the lowest three deciles are more likely to attend low secondary schools (Hauptschule) in Germany, which have a negative effect compared to the reference group of comprehensive schools, while higher performing students are more likely to be in medium (Realschule) or higher (Gymnasium) secondary schools.

9 The student background variables that change the most along the score distribution are parents' education and books at home, and less so personal characteristics like student's sex.
Fig. 5 The components of the return effect (grade, student background, resources, institutions, school types and intercept, in reading score points by decile)
The difference in the intercepts is positive, implying that the level of test scores is higher in Finland than in Germany due to unobserved factors and the choice of the reference school type.

The decomposition was also performed only for the grade level and student background variables in order to test the robustness of the results when comparing a schooling system that uses streaming (Germany) to a single school system (Finland). The findings are in line with the previous results. The only difference is that there is a negative characteristics effect and a positive return effect for the group of student background variables. When looking at the individual student background variables and the grade level, the characteristics in Germany are less favorable than in Finland concerning student's age and parents' origin but more favorable concerning the grade level, parents' education and the number of books at home. The return effect implies that returns in Germany are more favorable concerning the sex of students and parents' origin but less favorable concerning student's age, parents' education and the number of books at home than in Finland. This shows once again that German students are "punished" far more strongly by their school system for a low social background than are Finnish students.

A possible further step in the decomposition analysis would be to consider the estimated coefficients along the conditional distribution, not only at the mean of the distribution (cf. Garcia et al. 2001). However, most coefficients estimated by quantile regressions do not differ significantly from the OLS coefficients. Only the coefficient for student's age increases, while the coefficient for student's sex decreases slightly but significantly over the conditional test score distribution for Germany. Overall, a decomposition using quantile regressions apparently does not add further insights.
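The quantile-regression check mentioned above can be sketched as follows, estimating the production function at several conditional quantiles with statsmodels and comparing a coefficient of interest with its OLS counterpart. The formula is an abbreviated, hypothetical version of Eq. (1) and ignores survey weights and clustering.

```python
# Sketch of the quantile-regression robustness check: compare selected
# coefficients across conditional quantiles with the OLS estimate.
import statsmodels.formula.api as smf

formula = ("read_score ~ grade8 + grade10 + age_months + male"
           " + parent_foreign + par_no_sec + par_sec2 + par_sec3 + C(books_cat)")

ols_fit = smf.ols(formula, data=df).fit()
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    q_fit = smf.quantreg(formula, data=df).fit(q=q)
    print(f"q={q:.2f}  age: {q_fit.params['age_months']:.2f}"
          f"  (OLS: {ols_fit.params['age_months']:.2f})")
```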
5 Conclusion

The decomposition analysis showed that the low performance of German students compared to Finnish students is not due to less favorable student background and school characteristics, except at the bottom of the score distribution. German students have on average more favorable characteristics but experience much lower returns to these characteristics in terms of test scores than Finnish students. This holds in particular for the social background of students but not for student characteristics such as sex and grade level. The background of German students changes much faster along the score distribution, which explains the higher inequality in Germany. The institutional setting seems to be more favorable in Germany, while Finland is endowed with slightly more resources. A large part of the overall score gap between the countries is due to unobservable factors.

The results might imply that streaming in Germany penalizes students in lower school types and leads to greater inequality of educational achievement. It remains unclear, however, whether this can be attributed to the effect of school types per se or to the student background and innate ability that determine the allocation of students into school types. Overall, the variation in test scores can be explained much better by observable characteristics in Germany than in Finland.

In order to improve the performance of students in Germany, the educational achievement of students in the lower part of the test score distribution in particular has to be promoted. These students suffer from a highly disadvantaged student background, whose negative impact on performance might be intensified by the early streaming of the German schooling system at the age of ten. They have little opportunity to compensate for their background before they are divided into different school types. There is no evidence for a beneficial effect of higher teacher/student ratios, while a higher education of teachers seems to benefit students in Germany. Further research is needed on the effects of school types in educational production functions, which should try to isolate the 'true' effect of school type on educational achievement. Only then can the determinants of educational achievement be precisely estimated for schooling systems that use streaming.

Acknowledgements I would like to thank Charlotte Lauer, Denis Beninger, Hans Heijke, Peter Jacobebbinghaus, François Laisney, Sybrand Schim van der Loeff, Britta Pielen, Axel Pluennecke, Manfred Weiss, two anonymous referees and the editor as well as participants at the EALE conference in Lisbon for constructive comments. Financial support from the European Commission IHP project "Education and Wage Inequality in Europe", contract number HPSE-CT-2002-00108, is gratefully acknowledged. The data are readily available for purposes of replication. The usual disclaimer applies.
Appendix

See Tables A1–A4 and Fig. A1.
Secondary ed. 2 Secondary ed. 3 Tertiary ed. (Ref.) Books cat. 1 Books cat. 2 Books cat. 3 Books cat. 4 Books cat. 5 Books cat. 6 Books cat. 7 (Ref.)
Student backgr. Student’s sex Parents‘ origin Parents no sec. ed.
Tenth grade Student’s age
Nineth grade (Ref.)
Grade level Eighth grade
Reading score
0.50 (0.50) 0.20 (0.40) 0.01 (0.12) 0.09 (0.29) 0.52 (0.50) 0.38 (0.48) 0.01 (0.11) 0.06 (0.25) 0.20 (0.40) 0.23 (0.42) 0.21 (0.41) 0.15 (0.36) 0.12 (0.33)
0.15 (0.36) 0.61 (0.49) 0.24 (0.42) 188.44 (3.37)
490.85 (102.77)
0.49 (0.50) 0.03 (0.18) 0.09 (0.29) 0.10 (0.30) 0.41 (0.49) 0.40 (0.49) 0.01 (0.08) 0.07 (0.25) 0.23 (0.42) 0.24 (0.43) 0.25 (0.43) 0.14 (0.35) 0.06 (0.25)
0.11 (0.31) 0.89 (0.31) 0(0) 187.56 (3.42)
545.15 (87.27)
0.50 (0.50) 0.14 (0.35) 0.02 (0.13) 0.07 (0.25) 0.47 (0.50) 0.45 (0.50) 0.01 (0.10) 0.05 (0.21) 0.18 (0.39) 0.23 (0.42) 0.22 (0.41) 0.17 (0.38) 0.15 (0.35)
0.13 (0.33) 0.63 (0.48) 0.24 (0.43) 188.43 (3.40)
507.80 (95.70)
0.48 (0.50) 0.03 (0.16) 0.11 (0.31) 0.11 (0.32) 0.41 (0.49) 0.37 (0.48) 0.01 (0.08) 0.07 (0.25) 0.24 (0.43) 0.24 (0.43) 0.24 (0.43) 0.14 (0.35) 0.06 (0.24)
0.11 (0.31) 0.89 (0.31) 0(0) 187.55 (3.42)
546.05 (87.14)
FIN
GER
GER
FIN
Without imp. val.
With imp. values
Table A1 Weighted means and (standard deviations) for reading
0 0 0 0 0 0 0 0 0 0
0 for female 0 0
0 for other grade 182
0 for other grade
0 for other grade
206.93
Min
1 for fin. lower second. 1 for fin.upper second. 1 for fin. univers. 1 1 1 1 1 1 1
1 for male 1 if parent foreign 1 for less than sec.
1 for tenth grade 194
1 for nineth grade
1 for eighth grade
887.31
Max
No books at students home 1–10 11–50 51–100 101–250 251–500 More than 500 books
Sex of students Parents’ place of birth Highest educational level reached by a parent
Grade level of students Grade level of students Grade level of students Student’s age in months (−181 in regression)
Warm estimate of reading test score
Description
Instruction time % of high educated teachers No lack of material Institutions Private school Standardized tests Selection of students Budget (category variable) School types Vocational school Low sec. school Medium sec. school Highest sec. school Comprehensive school (Ref.) School type n.a.
Resources Teacher/student ratio
Table A1 continued
51.30 (0) 0.88 (0.19) 0.92 (0.27) 0.03 (0.17) 0.26 (0.44) 0.20 (0.40) 1.55 (0.52)
0 0 0 0 1 0
0.92 (0.28)
0.04 (0.20) 0.03 (0.18) 0.66 (0.47) 1.07 (0.39)
0.05 (0.22) 0.19 (0.39) 0.26 (0.44) 0.29 (0.45) 0.17 (0.38)
0.03 (0.17)
0.09 (0.02)
54.55 (4.18) 0.78 (0.29)
0.06 (0.02)
0
0.03 (0.18) 0.19 (0.39) 0.29 (0.45) 0.35 (0.48) 0.14 (0.35)
0.04 (0.19) 0.03 (0.17) 0.59 (0.49) 1.08 (0.40)
0.93 (0.26)
54.63 (4.40) 0.81 (0.29)
0.06 (0.02)
0
0 0 0 0 1
0.03 (0.18) 0.26 (0.44) 0.19 (0.40) 1.56 (0.53)
0.90 (0.29)
51.30 (0) 0.88 (0.20)
0.09 (0.01)
FIN
GER
GER
FIN
Without imp. val.
With imp. values
0
0 0 0 0 0
0 if school is public 0 0 if school may not select 0
0
42.12 0
0.02
Min
1
1 1 1 1 1
1 1 1 2
1
87.75 1
0.19
Max
No information on school type
Vocational school (Berufssch.) Low sec. school (Hauptschule) Medium sec. school (Realschule) Highest sec. school (Gymnasium) Comprehensive school (Gesamtsch.)
School type Standardized tests more than once a year School has right to select its students School’s right over budget allocation and formulation
Teachers per students at school level Minutes per year/1000 % of teachers with highest degree School lacks no material
Description
Table A2 Weighted means (st. dev.) of selected variables by school type for Germany

School type       Students (in %)   Reading score     Parents tert. educ.   % of high educ. teachers   Private school   Teacher/student ratio
All               4,917 (100)       490.85 (102.77)   0.38 (0.48)           0.78 (0.29)                0.04 (0.20)      0.06 (0.02)
Vocational        116 (2.4)         476.15 (72.71)    0.28 (0.45)           0.77 (0.28)                0 (0)            0.05 (0.01)
Low second.       931 (18.9)        405.96 (79.86)    0.18 (0.39)           0.54 (0.31)                0 (0)            0.06 (0.01)
Medium second.    1,235 (25.1)      498.19 (73.10)    0.34 (0.47)           0.81 (0.27)                0.05 (0.22)      0.06 (0.03)
High second.      1,713 (34.8)      577.51 (68.84)    0.61 (0.49)           0.97 (0.10)                0.09 (0.29)      0.06 (0.01)
Comprehensive     885 (18.0)        467.02 (84.81)    0.32 (0.47)           0.79 (0.27)                0 (0)            0.06 (0.01)
School type n.a.  37 (0.8)          287.77 (70.65)    0.21 (0.41)           0.28 (0.17)                0 (0)            0.10 (0.03)
Standard deviations are reported in parentheses; only in the first column is the percentage of students reported in parentheses.

Fig. A1 Real and hypothetical test score distributions (kernel density estimates of reading scores): characteristics effect (FIN(1) vs. GER), return effect (GER(1) vs. GER(2)), residual effect (GER(2) vs. GER) and interaction effect (FIN−FIN(1) vs. GER(1)−GER)
2.48 (10.73) 2.12 (3.83) −7.05 (4.35) −6.25 (3.88)
4.91 (9.00) −15.92 (12.44) 11.10*** (4.21) −1.04 (4.33) −24.13*** (8.08) −33.51** (6.85)
School types Voc. school Low sec. school – –
−96.69 (122.58) – −8.61 (9.24) 6.51 (8.77)
−44.94*** (2.50) −32.07*** (7.30) −26.36*** (4.13) −30.45*** (4.36) −10.90*** (3.07) −68.11*** (17.70) −49.67*** (7.60) −35.60*** (5.86) −29.39*** (5.62) −9.32 (5.75) −0.30 (5.80)
−15.67 (86.54) −0.52 (0.47) 7.77 (8.70) 6.62 (6.18)
−10.34*** (2.54) −13.90*** (3.62) −34.78*** (9.45) −21.39*** (4.92) −1.85 (2.80) −64.71*** (10.98) −57.70*** (8.10) −28.80*** (4.49) −18.94*** (4.24) −11.05*** (4.00) −8.39** (4.10)
Student background Student’s sex Parents‘ origin Parents no sec. ed. Secondary ed. 2 Secondary ed. 3 Books cat. 1 Books cat. 2 Books cat. 3 Books cat. 4 Books cat. 5 Books cat. 6
−47.68*** (4.93) −0.76** (0.33)
Resources Teacher/student ratio Instruction time % of high educated teachers No lack of material Institutions Private school Standardized tests Selection of students Budget (category variable)
−37.75*** (3.83) 48.70*** (3.42) −2.65*** (0.41)
Grade level Eighth grade Tenth grade Student’s age
– –
−4.89 (12.76) 4.38 (3.72) 2.39 (4.95) −10.83*** (3.88)
−3.39 (12.89) −15.68 (12.10) 18.08*** (5.00) −1.57 (5.58) −45.83*** (14.91) −33.24*** (8.77)
−51.97 (131.81) – 2.80 (9.83) −2.14 (6.73)
−45.88*** (2.89) −40.80*** (9.99) −30.22*** (4.70) −34.05*** (4.74) −12.53*** (3.37) −59.26*** (14.99) −50.64*** (10.12) −32.33*** (7.40) −27.33*** (7.41) −5.17 (7.21) 0.70 (7.71)
−42.80*** (5.79) – −0.46 (0.41)
47.14 (107.03) .10 (0.50) 10.50 (10.98) −0.95 (7.59)
−14.68*** (3.06) −13.27*** (4.51) −33.75*** (11.61) −23.90*** (6.21) −2.45 (2.87) −63.96*** (15.33) −50.55*** (6.96) −27.74*** (5.31) −18.87*** (5.23) −13.60*** (4.82) −8.69* (4.79)
−43.26*** (4.44) 48.76*** (4.20) −2.58*** (0.50)
FIN
GER
GER
FIN
Without imputed Values
With imputed values
Table A3 Coefficients (standard errors) of weighted survey regressions
– –
8.77 10.86 8.79 8.24
– –
0 0 3.36 0.66
10.61 0 14.56 1.11
1.36
1.87
14.46 14.79 8.24 8.56
0 1.40 10.24
0 – 0
FIN
0 2.01 16.11
0 0 0
GER
Percentage of missing values
27.99*** (5.91) 85.31*** (5.83) −146.91*** (10.39) 525.79*** (27.53) 4917 0.5566 224.85
– – – 627.78*** (16.11) 4855 0.1747 42.01
18.60*** (7.52) 74.39*** (7.23) – 502.18*** (31.47) 3080 0.5074 67.97
– – – 628.83*** (15.53) 3407 0.1800 32.26
FIN
GER
GER
FIN
Without imputed Values
With imputed values
P-values: *** 1%, ** 5%, * 10%. Cluster robust standard errors are reported in parentheses
Medium sec. school Higher sec. school School type n.a. Intercept Number of observations R-squared F-test
Table A3 continued
– – –
GER – – –
FIN
Percentage of missing values
Table A4 Results of decomposition for significantly different coefficients

                        Sum      Grade    St. Backgr.   Resour.   Institut.   Schools    Cons
Total gap               54.29
Characteristic effect  −40.60   −6.38    −1.14          0.56     −7.67       −25.98
Return effect           56.07    6.85   −25.80          0         0          −25.98     101.00
Interaction effect      38.83    8.31     4.54          0         0           25.98

Results of Oaxaca–Blinder decomposition of gap in reading test scores using only coefficients that differ at the ten percent level between Finland and Germany
References

Adams R, Wu M (2002) PISA 2000 technical report. OECD, Paris
Ammermueller A, Heijke H, Woessmann L (2005) Schooling quality in Eastern Europe: educational production during transition. Econ Educat Rev 24:579–599
Artelt C, Schiefele U, Schneider W, Stanat P (2002) Leseleistungen deutscher Schülerinnen und Schüler im internationalen Vergleich (PISA). Z Erziehungswissenschaften 5:6–27
Baumert J, Deutsches PISA-Konsortium (2001) PISA 2000: Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich. Leske und Budrich, Opladen
Bishop J, Woessmann L (2004) Institutional effects in a simple model of educational production. Educat Econ 12:17–38
Blau F, Kahn L (1992) The gender earnings gap: learning from international comparisons. American Economic Review, Papers and Proceedings of the Hundred and Fourth Annual Meeting of the American Economic Association 82:533–538
Blinder A (1973) Wage discrimination: reduced form and structural estimates. J Human Resour 8:436–455
Eurybase (2003) The information network on education in Europe. http://www.eurydice.org, visited October 24th 2003
Fertig M (2003) Who's to blame? The determinants of German students' achievement in the PISA 2000 study. IZA Discussion Paper No. 739, Bonn
Garcia J, Hernández P, López-Nicolás A (2001) How wide is the gap? An investigation of gender wage differences using quantile regression. Empirical Econ 26:149–167
Hambleton R, Swaminathan H (1989) Item response theory: principles and applications. Kluwer, Boston
Hanushek E, Luque J (2003) Efficiency and equity in schools around the world. Econ Educat Rev 22:481–502
Jann B (2005) Standard errors for the Blinder–Oaxaca decomposition. http://repec.org/dsug2005/oaxaca_se_handout.pdf
Juhn C, Murphy K, Pierce B (1993) Wage inequality and the rise in returns to skill. J Polit Econ 101:410–442
Lauer C (2000) Gender wage gap in West Germany: how far do gender differences in human capital matter? ZEW Discussion Paper No. 00–07, Mannheim
Oaxaca R (1973) Male–female wage differentials in urban labor markets. Int Econ Rev 14:693–709
OECD (2001) Knowledge and skills for life: first results from PISA 2000. OECD, Paris
Silverman B (1986) Density estimation for statistics and data analysis. Chapman & Hall, London
Todd P, Wolpin K (2003) On the specification and estimation of the production function for cognitive achievement. Econ J 113:F3–F33
Välijärvi J, Linnakylä P, Kupari P, Reinikainen P, Arffman I (2002) The Finnish success in PISA – and some reasons behind it. http://www.jyu.fi/ktl/pisa/publicationl.pdf
West M, Woessmann L (2006) Which school systems sort weaker students into smaller classes? International evidence. European J Polit Econ (in press)
Woessmann L (2003) Schooling resources, educational institutions, and student performance: the international evidence. Oxford Bull Econ Statist 65:117–170
The impact of unionization on the incidence of and sources of payment for training in Canada David A. Green · Thomas Lemieux
Accepted: 18 August 2006 / Published online: 25 September 2006 © Springer Verlag 2006
Abstract This paper uses the Adult Education and Training Survey (AETS) to look at the effect of unions on the incidence of and sources of payment for training in Canada. Simple tabulations indicate that union workers are more likely to engage in training activities than nonunion workers. The higher incidence of training among union workers is driven by the fact that they are more likely to take training courses offered by their employers than nonunion workers. This suggests that union workers are more likely to participate in training activities that enhance their firm-specific human capital. This union effect disappears, however, once we control for a variety of factors such as age, education, and, in particular, firm size and seniority. Everything else being equal, unions have little effect on the provision of training in Canada. Finally, we present some limited evidence that unions help increase the participation of firms in the financing of training activities.

Keywords Unions · Training · Human capital

1 Introduction

A very large literature has clearly established that unions tend to raise wages in decentralized labour markets like Canada, the United States, or the United Kingdom. Much remains to be learned, however, about the effect of unions on many other important economic outcomes. In particular, if union presence encourages investments in human capital and training, we would expect
D. A. Green · T. Lemieux (B) Department of Economics, University of British Columbia, #997-1873 East Mall, Vancouver, BC, Canada V6T 1Z1 e-mail:
[email protected]
union workers to be more productive, which would in turn account for part of the union wage gap. Under this scenario, unionism as an institution would be making a positive contribution to the overall skill level of the workforce. Unfortunately, there is little direct evidence on this for Canada. In this paper, we use the Canadian Adult Education and Training Survey (AETS) to answer basic questions about the relationship of unionization to training levels and the sources of payment for training.

Any attempt to study training impacts must start with the important distinction between general and firm specific human capital. There are good reasons to believe that unions will have different impacts on both the levels of and funding for the two different types of human capital. For example, unions have been shown to be associated with more stable (i.e., longer tenure) jobs. This could lead to more firm specific investment because firms and workers both believe the relationship will last longer and are therefore willing to invest more in it in the union sector. On the other hand, to the extent that workers invest in general human capital to improve their outside option value should the current job end, greater job stability in the union sector could lead to lower investment in this type of training. In recent years, notions of what constitutes general and specific human capital and who pays for the investment in each have been refined considerably. It is still the case, though, that union impacts are often theoretically ambiguous, implying the need for an empirical investigation. One important advantage of the AETS in this regard is that it contains detailed information on the type of and source of payment for training. This enables us to look separately at the effect of unions on general and specific human capital.

In this paper, we will first set out different theories of human capital investment and discuss the role of unions within them. We then turn to the AETS to examine the implications of the various theories. We do this first using simple tabulations and then using econometric techniques to control for the impacts of other worker characteristics whose effects might otherwise be assigned to unionization. We find that results differ slightly for males and females. For both males and females, unions appear to have only small effects on the amount of either firm specific or general human capital investment. There is some evidence that unions do alter the extent to which firms take part in the investments for males, but little evidence of this for females.
2 Previous literature and theoretical considerations

2.1 Theoretical considerations

Any examination of the impact of unions on training must start with the key distinction between general and firm specific human capital. As is well known (Becker 1964), in a perfectly competitive labour market, workers pay (in terms of lower wages) for all investments in general human capital, since this type of human capital is equally valued in all firms. In contrast, firm specific capital is valuable only at a particular firm. This implies that firms can invest in this type
of human capital without fearing that the trained worker will be bid away by another firm. Several papers have pointed out, however, that the sharp predictions of the human capital approach about who should pay for training do not always hold in practice. For instance, Loewenstein and Spletzer (1998) found that training that appears to be general in nature is not infrequently paid for by the employer. This has led to the development of several models in which firms rationally invest in general as well as specific human capital. Loewenstein and Spletzer (1998) develop a model in which a wage contract is specified with a minimum wage guarantee for future periods. In their model, if the worker's wage option outside the firm is less than the wage guarantee, then a small increase in productivity from a general human capital investment does not need to be matched with a wage increase to keep the worker attached to the firm. This means the firm captures the full return to the investment in either specific or general human capital. A strongly related idea is presented in Acemoglu and Pischke (1999), who argue that if there is wage compression, in the sense that the wage that must be paid to keep a worker at a firm rises less rapidly with training than does productivity, then there is again an incentive for firms to invest in general human capital. They discuss several possible sources for such wage compression, including union effects. More generally, Stevens (1994) points out that imperfect competition among firms, or any other imperfections that induce uncertainty in turnover and wages being paid below marginal products, will tend to induce firms to invest in the general human capital of workers.

With these concepts in mind, we now turn to the impact of unions on training. More specifically, we look at the implications for training of two well established effects of unions: (1) unions establish higher wage guarantee levels or, in the extreme, make the guarantee credible and therefore feasible; and (2) unions increase worker attachment to the firm. The first effect has been well documented in multiple empirical investigations of union impacts that show that pay related outcomes are quite different in union versus nonunion firms. The best documented evidence concerns average pay levels, with workers in union firms earning an average of approximately 10–15% more than comparable nonunion workers. Unions are also well documented to be associated with reduced wage differentials by education level, job tenure and gender. Thus, the wage–tenure profiles at union firms are higher but flatter than at nonunion firms. The second effect of unions has also been well documented in a literature that systematically shows that union workers tend to have longer job tenure than nonunion workers. Freeman and Medoff (1984) argue that unions increase tenure because they provide workers with a "voice" to correct perceived difficulties in the workplace. Without unions, individual workers may find they have little ability to induce change at work and thus choose to "exit" the firm when they face difficulties.

We start by discussing the effect of higher union wage guarantees on training. As it turns out, whether or not wage guarantees increase training critically depends on how wage guarantees affect the whole wage–tenure profile. As is well known, by setting high first period wages (i.e., flattening the wage–tenure
profile by raising entry wages), unions may preclude credit constrained workers from being able to finance (through lower wages) investments in either general or firm specific human capital. Mincer (1983), among others, argues this is a plausible union effect. However, to the extent that unions also set wages high enough to have binding wage guarantees in the second period, there will not be a reduction in investment with the introduction of unions. Instead, having a union wage structure will just imply that general human capital investment, and specific human capital investment, will be entirely funded by the employer, as in the model of Loewenstein and Spletzer (1998). The main empirical implication of union wage guarantees is thus that more of the funding for both the general and specific human capital investment of union workers should be supplied by employers. In contrast, the effect of union wage guarantees on the level of general or specific human capital investment is ambiguous.

The second effect of unions, namely that they strengthen the attachment of workers to firms, implies that unionized firms should be more willing to invest in the human capital of their workers. This implication holds both for specific human capital and, in the scenarios where firms invest in general human capital, for general human capital as well. Quite simply, in any scenario in which firms are willing to invest in human capital, they will be more likely to make such an investment the greater the probability that the trainees will remain with the firm long enough for the firm to earn a return. Thus, the greater stability engendered by unions should imply more investment. Note that these two effects of unions are linked, since higher wage guarantees in the future reduce turnover. Booth and Chatterji (1998) point out that by preventing monopolistic firms from cutting future wages, unions reduce turnover and thus generate more training.

It is also interesting to explore how robust the various predictions about the effect of unions on training are in more general models of human capital accumulation and union behaviour. Kuhn and Sweetman (1999) propose a more general model of human capital accumulation in which they divide general human capital into general human capital that is useful both inside the current firm and in other firms, and general human capital that is useful only at other firms.1 Individuals may initially invest in skills of various types before they know what firm they will be associated with. Once they join a particular firm, they will likely invest further in some of those skills but let others, which are not directly relevant for their current firm, atrophy. Kuhn and Sweetman argue that workers in firms with more turnover will be more likely to invest in the general human capital that is not directly relevant at the current firm in order to keep their options open. Since unions are associated with more stable employment relationships,
1 One example of this alternative form of human capital is industry-specific human capital that is useful in an industry other than that of the current firm. Neal (1995) and Parent (2000) show evidence that this is an important form of human capital in practice.
union workers will be more likely to under-invest in general human capital that is not relevant at the firm.2

Another extension is to relax the assumption that unions behave myopically by bargaining on wage profiles and work conditions without taking account of unintended side-effects on training outcomes. Acemoglu and Pischke (1999) consider union impacts on training in a model that is in the spirit of a union monopoly model of wages and employment. In union monopoly models, unions set wages while firms determine employment based on the contract wage, but unions take account of how firms will respond when choosing their preferred wage. In their model, Acemoglu and Pischke consider a union choosing its preferred wage profile in part considering the effect of that profile on firm investment. Acemoglu and Pischke essentially assume that the union wage is above the outside option in all periods. In that situation, unions have an incentive to institute a flat wage profile because, as described earlier, such a profile induces firms to invest in general human capital. Thus, their model provides a rationale for unionized firms having flattened wage profiles. Given that unions raise average wages above those available at nonunion firms, their model also implies that unionized firms will invest more in general human capital than will nonunion firms.

Weiss (1985) also looks at the impact of unions on training in the context of a union model in which senior union members control union decision making. In Weiss's model, senior union members are able to extract a transfer from junior members. When the transfer cannot exceed some maximum size, Weiss shows that it is optimal for the senior members to establish a contract that requires the junior members to train, which effectively limits the amount of labour they actually supply. Barron et al. (1987), however, show that this finding critically depends on the assumption that there is an upper bound on transfers from junior to senior workers. Using the alternative assumption that there is a lower bound on the net wage for new hires (i.e., their wage taking into account lost time due to training and the transfer to senior union members), Barron et al. show that it is then optimal for senior workers to under-train junior members. In this setting, whether unions over-train or under-train depends crucially on the nature of the restrictions faced by senior members in maximizing the transfers from junior members.

Finally, Kennedy et al. (1994) argue that unions may have a negative impact on training through an alternative route. Specifically, strict union rules about job content and assignment may imply that firms have less incentive to train workers for anything other than a very narrowly defined task. Whether this would imply lower training levels is not clear, though it would almost certainly imply a reduction in efficiency.
While it is difficult to draw robust empirical implications from these various models, three main messages nonetheless emerge from the existing literature. First, unions should unambiguously increase investment in firm specific human capital because of union effects on worker stability. Earlier empirical results also suggest that most of this investment is borne by the firm. Second, predictions about the effect of unions on general human capital investment are quite ambiguous and depend on the specifics of the different models. It is thus not that surprising that empirical studies on the effect of unions on training have been relatively inconclusive (see below). Third, if anything, the presence of unions will make it more likely that the firm, as opposed to the individual, pays for general training.

2.2 Empirical results

The empirical papers on the impact of unions on training provide quite mixed results. The first studies of union impacts on training, by Duncan and Stafford (1980) and Mincer (1983), used US data. For example, Mincer found that older (48–64 year old) union workers who did not change union status between years (union stayers) had significantly less training than older nonunion workers who did not change status (nonunion stayers). Older workers who moved from a nonunion to a union job (union joiners) also had less training on the union job than older workers who stayed in their nonunion jobs, while the results for younger workers were not significant. Note, however, that conclusions reached from the 1969 to 1971 National Longitudinal Survey data used by Mincer must be interpreted with caution since the question used confuses investment in with use of human capital.3

Barron et al. (1987) also find a negative effect of unions on training. In their case, the data is from a survey of employers who are asked about how much and how they provide training to new workers. The questions appear to be geared toward firm specific human capital investment, asking about training provided by specially trained personnel, co-workers and by the employee watching others work. Barron et al. (1987) find that the proportion of the firm's non-supervisory workers covered by collective bargaining is statistically significantly negatively related to measures of management provided training, worker provided training, and total training.

On the opposite side, Lynch (1992) uses data from the National Longitudinal Survey of Youth (NLSY) for 1980 and 1983 to show a positive effect of unions on training. The NLSY training question is closer to the AETS questions we use below, asking "In addition to your schooling, military and government-sponsored training programs, did you receive any other types of training for more than 1 month?" It also asks where the individual received the training. Lynch finds that union membership has an insignificant impact on training

3 The question used is "Do you receive or use additional training (other than school training) on your job?".
received outside the firm, but positive and strongly significant impacts on training received on the job site and on apprenticeship training. Similarly, using a survey of Australian firms, Kennedy et al. (1994) find that firms where unions are actively involved in bargaining have significantly more training. The authors argue that the distinction between mere union coverage and active unions is crucial in the Australian context, showing that a union density variable does not have statistically significant effects but a measure of union activity does.

The evidence for Britain is also generally supportive of positive effects of unions on training. Green (1993) investigates the inter-relationships among training, firm size and unionization. Green's main finding is that unions have significant positive effects on training in small firms but virtually zero effects in large firms. This is an important result because it is often difficult to separate union effects from the effects of the more formal complaint and wage processes instituted in larger firms. More recently, Green et al. (1999), Booth et al. (2003), and Booth and Böheim (2004) have all found positive effects of unions on training in Britain using a variety of data sets and estimation techniques. Dustmann and Schönberg (2004) also find positive effects of unions on training in Germany. They also provide compelling evidence that union wage compression effects, as opposed to other factors, are the source of the positive link between unions and training.

3 Data

Our main investigation is based on the AETS for 1997. The AETS is a special survey attached to the Labour Force Survey (LFS); it contains both the LFS questions on basic personal characteristics, such as age, gender, education level and job tenure, and an extended battery of questions on training in the previous calendar year. The AETS is not a perfectly random sample of the Canadian population, and we use the weights provided with the survey in all our calculations.

We make several sample cuts to obtain a sample tailored to the issues we are investigating. We are primarily interested in investments in training and education that are related to work after individuals have finished their main formal schooling. For that reason, we omit individuals who are full time students or over age 65 at the time of the survey, and individuals who did not work during the sample year.4 Because we are interested in how union status affects investments in and by employees, we also exclude individuals who are self-employed on their main job at the time of the survey. The original AETS sample contains 41,645 individuals. Our sample cuts result in a sample size of 18,033 observations.

4 We excluded all those taking schooling full time since we wanted to focus on job related and funded training. Since we are studying union impacts it seemed necessary to focus on training while employed; otherwise we would end up lumping all those who were not employed (or at school full time) into the nonunion category. Note also that though we are only looking at individuals who were employed at some time during the previous year, we do not restrict our analysis of training to training episodes undertaken while the individual was working.
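A sketch of these sample restrictions in pandas is shown below; the file and flag names are hypothetical stand-ins for the actual AETS/LFS variables, so it illustrates the logic of the cuts rather than the authors' code.

```python
# Sketch of the sample construction: drop full-time students, respondents over
# age 65, non-workers and the self-employed. Variable names are hypothetical.
import pandas as pd

aets = pd.read_csv("aets_1997.csv")              # raw file: 41,645 individuals

sample = aets[
    (aets["full_time_student"] == 0)
    & (aets["age"] <= 65)
    & (aets["worked_in_ref_year"] == 1)          # worked at some point in 1997
    & (aets["self_employed_main_job"] == 0)      # wage and salary workers only
].copy()

# With the true AETS flags this should leave roughly 18,033 observations,
# to be weighted by the survey weights in all subsequent calculations.
print(len(sample))
```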
The AETS contains information on up to five education or training spells in each of three categories: programmes, courses, and hobbies. The ordering of the training questions in the questionnaire is important to keep in mind in attempting to understand the content of these three categories. Individuals are first asked, “At any time during 1997, did you receive any training or education including courses, private lessons, correspondence courses (written or electronic), workshops, apprenticeship training, arts, crafts, recreation courses, or any other training or education?” Conditional on answering yes to this first question, respondents are then asked if the training was intended to obtain a high school diploma, a formal apprenticeship certificate, a trade or vocational diploma or certificate, a college diploma or certificate, or a university degree, diploma or certificate. An answer of yes to any of these questions initiates a series of additional questions related to what are called “programmes”. Programmes thus consist of training or education spells aimed at obtaining a formal certificate. Whether or not the respondents answer yes to taking programme training, they are then asked whether they took any other courses. Finally, respondents are asked if they took any hobby type courses. Our focus in this paper is on work related training. For that reason, we do not count hobby spells as training. For both programmes and courses, respondents are asked the main reason for taking the training, with possible answers being: (1) a current or future job; (2) personal interest; (3) other. We select only programme and course spells for which the respondent answered that the main reason for taking the training was the current or future job. Thus, for individuals who have only hobby spells and/or only programmes or courses done for personal interest, we keep the observation but treat it as if there was no training spell. Even after doing this, there are a considerable number of observations for whom we observe both work related programme and course training spells and/or multiple programme or course training spells. We view programme and course spells as potentially quite different, with the first being more like going back for more formal schooling and invariably being associated with obtaining a formal ratifying document of some kind, while the latter may contain a variety of types of work related courses. Indeed, we will argue that programmes can be viewed as relating purely to general human capital formation while at least some courses may be related to firm specific human capital formation. Given this perspective, we elected to keep information on programme and course training separately for each respondent. In order to simplify the exposition, we focus on only one course and/or programme per person. For an individual with multiple course spells, we select only the spell with the longest duration and similarly select the longest duration programme spell for individuals with multiple programme spells. For individuals with both multiple programme and course spells, we record the longest of each.5 5 This choice has little consequence for training programmes since only 3% of individuals taking a
programme took more than one programme during the year. The programme of the longest duration represents 99% of the total duration for all programmes taken. It is more common, however, for the same person to take more than one training course during the same year. Thirty-five percent of people who took training courses took more than one course in the same year. Nonetheless, the longest course accounts for 85% of total hours of course training. So even in the case of training courses, we lose little information by focusing on the longest spell of training.
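A minimal sketch of this spell-selection rule, assuming a hypothetical spell-level file with one row per reported spell; the column names (person_id, spell_type, main_reason, duration_hours) are invented for illustration:

```python
import pandas as pd

spells = pd.read_csv("aets_spells_1997.csv")  # hypothetical spell-level extract

# Keep only work-related programme and course spells (hobby spells are dropped).
work_related = spells[
    spells["spell_type"].isin(["programme", "course"])
    & (spells["main_reason"] == "current or future job")
]

# Within each person and spell type, keep the single spell of longest duration.
idx = work_related.groupby(["person_id", "spell_type"])["duration_hours"].idxmax()
longest = work_related.loc[idx].reset_index(drop=True)
```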
The discussion in the previous section pointed out that there are good theoretical reasons for anticipating different impacts of unions on specific versus general human capital. More importantly, the distinction between general and firm specific human capital becomes blurred once we introduce frictions between wages paid in the current firm and those offered at outside firms. Thus, a more useful distinction is one between investments in human capital that are easily verified by alternative employers versus ones that are only directly observable by the worker and his or her current employer. The former can provoke offers from alternative employers attempting to poach the investment while the latter cannot. This is a somewhat different distinction from the traditional technologically driven distinction between skills that are useful only with the current firm’s technology versus skills that are useful in the production functions of other firms. With the distinction based on observability in mind, we examine two different schemes for classifying the training spells we are studying into general versus firm specific human capital categories. As stated earlier, we view programme spells as being clearly related to general human capital. In these spells, individuals work toward formal qualifications which by their very nature signal to prospective employers throughout the economy that the individual has acquired a set of skills. Indeed, the point of this type of education is often to prepare individuals for productive work in general, not for work at a specific firm. Thus, all the schemes we examined share the feature that all programme spells are always classified as general human capital. This means that the definitional issue comes down to whether and how to classify course spells. The first, and simplest, classification scheme we use for the course spells is to define all course spells as being related to firm specific human capital. While this is clearly an exaggeration, we believe that the simple association of programme spells with general human capital and course spells with firm specific human capital is the most robust approach for portraying the direction, if not the exact magnitude, of the relationship between unionization and the different types of human capital generation. We also use an alternative classification scheme based on who actually provides the training.6 The survey asks questions about who provided the training, with possible answers including an educational institution, a private educational or training institution, and the place of work. Under this alternative classification, we assume that training taken at work is specific to the current firm and is not easily observable to alternative firms. Indeed, if the training was intended to generate general skills, it is unlikely that it would be efficient for the course to be provided by the employer since public or private educational institutions would have a comparative advantage in providing such training.
6 We also explored alternative definitions based on stated reasons for why individuals took courses (e.g. obtaining formal qualifications or upgrading skills for the current job). Unfortunately, we could not use these alternative classifications in practice because of shortcomings in the way the questions were asked and answered.
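The two classification schemes can be summarised in a small helper function; this is only a restatement of the rules above, with hypothetical field values for the spell type and training provider:

```python
def classify_spell(spell_type: str, provider: str, scheme: str = "simple") -> str:
    """Classify a work-related training spell as general or firm-specific capital.

    scheme="simple":   all programmes are general, all courses are firm specific.
    scheme="provider": programmes, and courses not provided by the employer, are
                       general; only employer-provided courses are firm specific.
    """
    if spell_type == "programme":
        return "general"
    if scheme == "simple":
        return "firm specific"
    return "firm specific" if provider == "employer" else "general"
```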
Table 1 Basic tabulations of training rates

Outcome                All nonunion  All union  Males nonunion  Males union  Females nonunion  Females union
Training               0.28          0.32       0.28            0.29         0.28              0.36
Programme training     0.097         0.076      0.097           0.066        0.098             0.089
Course training        0.20          0.26       0.20            0.23         0.20              0.30
Both prog. and course  0.016         0.018      0.018           0.012        0.014             0.026
General training       0.22          0.21       0.22            0.19         0.23              0.25
Firm spec. training    0.062         0.11       0.062           0.11         0.061             0.12
Thus, all course training provided by the employer is classified as specific training and all other course training plus programme training is defined as general training. This definition fits with standard classification schemes in other papers where training spells are separated into those done on the job versus those done off the job. One caveat to keep in mind, however, is that on-the-job training is not measured in the AETS. Because of this, we may be greatly understating the true importance of specific training.
4 Descriptive statistics
As a first step in characterizing the data, we would like to establish whether unions are associated with different levels of training of any kind. In all the work that follows we use a union dummy variable that equals one if the individual was a member of a union or was covered by a collective agreement on their main job during the previous year. Table 1 provides basic tabulations on whether an individual received training in the previous year (we do not limit the sample to individuals working while taking training). The first row corresponds to whether an individual receives training of any work related type while employed in the previous year, broken down by union status and gender. The first two columns reveal that, overall, union workers are only 4% points more likely to train than nonunion workers. However, this small union effect hides noticeable differences within subgroups. While union and nonunion males are equally likely to train, unionized females are 8% points more likely to train than nonunion females. The second and third rows of Table 1 contain results relating to our first, simplest definition of general and firm specific human capital: general human capital is equated with programme training while specific human capital is equated with courses. Differences between union and nonunion workers are much sharper when one looks at these subcategories. Thus, for all workers pooled together, union workers are 2% points less likely to get programme training but 6% points more likely to get course training. The direction of these differences holds up in the specific gender groups, with females showing the largest difference in specific training. For females, union workers are 10% points more likely to get course training than nonunion female workers.
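The tabulations in Table 1 amount to survey-weighted means of 0/1 training indicators within union-by-gender cells. A sketch of that calculation, continuing the hypothetical data frame from the sample-construction sketch above (union_member, covered_by_ca, female and the training indicators are placeholder column names):

```python
import numpy as np
import pandas as pd

# Union dummy: member of a union or covered by a collective agreement on the
# main job during the previous year.
sample["union"] = (
    (sample["union_member"] == 1) | (sample["covered_by_ca"] == 1)
).astype(int)

def weighted_rate(group: pd.DataFrame, outcome: str) -> float:
    """Survey-weighted incidence of a 0/1 outcome within a cell."""
    return np.average(group[outcome], weights=group["weight"])

rows = []
for (union, female), cell in sample.groupby(["union", "female"]):
    rows.append({
        "union": union,
        "female": female,
        "any training": weighted_rate(cell, "any_training"),
        "programme": weighted_rate(cell, "programme_training"),
        "course": weighted_rate(cell, "course_training"),
    })
print(pd.DataFrame(rows))
```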
Table 2 Training types and payment sources

             Programme training   Course training      General training     Firm-specific training
Payer        Nonunion   Union     Nonunion   Union     Nonunion   Union     Nonunion   Union
Employer     0.42       0.48      0.88       0.90      0.67       0.73      0.99       0.97
Self         0.67       0.65      0.16       0.16      0.40       0.36      0.047      0.076
Government   0.12       0.13      0.043      0.076     0.081      0.097     0.021      0.076
Union        0.006      0.031     0.026      0.045     0.019      0.053     0.018      0.021
Shared       0.18       0.22      0.075      0.090     0.13       0.15      0.041      0.069
Note All proportions are computed for men and women pooled together. The “shared” category consists of training jointly funded by the employer and the worker. “General training” refers to either programme training or course based training that is not provided directly by the employer. “Firm specific training” refers to course based training that is provided directly by the employer
The last two rows of the table present results relating to our alternative definition of general and specific human capital investment, in which specific training is defined as only course training that is directly provided by the employer. By this definition as well, union workers get more specific human capital training than their nonunion counterparts. The general training again favours nonunion workers for males, but union females are now more likely to train than their nonunion counterparts. Thus, for males the patterns fit with a model in which union firms are willing to invest more in specific human capital because of added worker stability, but this is partially offset by reduced investment in general human capital. For females there is no such trade-off using the second definition of specific human capital: union and nonunion workers receive very similar levels of general training but union workers get more firm specific training. The same trade-off between general and specific human capital investment witnessed for males is seen for females if we use the first definition of specific and general human capital. As discussed in the previous section, considerable attention has been paid to the issue of who actually pays for training. Table 2 contains a breakdown of the source of payment by training type, again separated by union status for men and women pooled together (the results are quite similar for men and women analysed separately). The numbers in the table correspond to the proportion of trainees of a particular type who state that some or all of the training was funded by a given source. Note that respondents are able to list multiple funding sources so there is no reason to expect the reported proportions to sum to one. While it is hard to be certain, the wording of the funding questions in the survey points toward individuals interpreting this as direct payment for training as opposed to indirect payments through accepting lower wages on the part of the workers or paying wages above marginal product on the part of the firm. This presumably understates the participation of workers in the funding of training. For programme training, which we argue is one way one might define general human capital investment, the patterns fit broadly with the discussion in
the previous section. In particular, the majority of the direct payment for this training is made by some combination of individuals and the government. This fits with the traditional view that general human capital investment should be done by either the worker or by society. However, as in earlier studies, we find evidence of a substantial amount of investment by employers as well. For course training, close to 90% of the funding is carried out in whole or in part by employers. Financing by the worker is much smaller than with programme training and in close to half the cases where there is investment by the worker, that investment is shared with the firm. The government also plays a much smaller role in this type of investment than in programme training. The payment proportions fit with findings in earlier studies indicating that employers pay for most firm specific investments on their own. For nonunion workers, for example, 88% of course training involves some firm investment and in 80% of cases, it involves firm investment with no sharing of the investment with the worker. The last two sets of columns regroup training spells according to the alternative definition of specific human capital based on whether the firm provided the training directly. Not surprisingly, employers took part in funding for close to 100% of firm specific training defined in this way. Individuals take only a very limited role in funding this firm specific training, with much of that limited involvement shared with firms. For general human capital defined in this way, employers are actually involved in funding a greater proportion of spells than individuals, with governments playing a smaller, though still substantial, role. In terms of union effects, recall the earlier prediction that investment in general human capital should be funded more by the firm and less by the worker in the union versus the nonunion sector. Examining the programme based definition of human capital, there is limited evidence in support of this conclusion. Employers are more likely to take part in funding general human capital in the union sector but workers themselves are investing to the same degree in both sectors. Indeed, we could follow Kuhn and Sweetman in assuming that general human capital can be divided between capital useful in the firm and capital useful outside the current firm (alternative capital). Then, we could define the worker investment in general training useful within the firm as being reflected only in the investments they share with the firm. By that measure, union workers actually invest more in this type of general human capital than do nonunion workers. Staying with these definitions, there are more spells with worker investment but without firm investment in the nonunion than the union sector (the difference between the “self” and “shared” rows is 0.49 vs. 0.43). If we assume such funding reflects investment in alternative human capital (or at least general human capital investment from which firms cannot capture the return), this could correspond to union workers investing less in alternative capital because of greater perceived job stability. Note also that using the broader definition of general training given in columns 5 and 6 yields very similar results. Implications of the models for firm specific investment patterns are unclear. One would generally expect firms to play a large role in such investment and earlier
empirical work suggests that they handle this investment almost exclusively. According to traditional models of firm specific human capital investment, firms may require workers to share in the investment in order to ensure that workers maintain an attachment to the firm and do not walk away with the investment. One might hypothesize that firms will require less investment from workers in situations of greater job stability, such as that engendered by unions. However, Hashimoto (1981) shows that as long as the separation rate is known, there is no necessity for the firm to share the investment with the worker. Any sharing rule can be optimal. Thus, there is no direct implication for differences in funding sources between the union and nonunion sectors for firm specific training. The results for either the course training or the alternative definition of firm specific training are consistent with these ambiguous predictions. They show no sizable difference between employers’ and workers’ involvement in the union and nonunion sectors. All of the discussion to this point has been based upon models in which union impacts on training are indirect. It is also possible that unions affect training investment directly by helping to pay for it themselves or by bargaining for it as part of the collective agreement. This might be a reasonable approach if training was perceived by members as something to which they had insufficient access on their own. The results in Table 2 reveal that unions play a very small direct funding role, taking part in investing in at most about 5% of spells of any type. Impacts through collective agreements are similarly small. The AETS contains a question on whether the training was specified as part of a collective agreement. Only 0.56% of trainees specify that their training was part of a collective agreement. This is in accord with earlier studies that find that unions rarely bargain directly over training.
5 Probit analysis of training incidence
The results in Tables 1 and 2 suggest that unions have some impacts both on the incidence of sub-categories of training and on the overall training levels. In particular, unions appear to slightly reduce the incidence of general training while increasing the incidence of specific training. However, this conclusion is based upon simple tabulations. Union and nonunion workers differ in observable characteristics that are themselves related to training propensity. In this section, we first present tabulations showing union/nonunion differences in individual and firm characteristics and then re-examine union impacts controlling for such differences. Table 3 shows variable means for various personal and firm characteristics, broken down by union status for the whole (men and women pooled) sample. The table reveals that there are substantial differences between union and nonunion workers in many dimensions. In terms of education levels, for example, union workers are less likely to have high school or less education and more likely to have at least some post-secondary education. The high unionization rate in the public sector in Canada is reflected in the fact that approximately
Table 3 Variable means by union coverage status

Variable                       Nonunion   Union
Education
  Not a high school graduate   0.17       0.15
  High school graduate         0.24       0.18
  Some post secondary          0.09       0.08
  Completed post secondary     0.33       0.37
  University                   0.17       0.22
Public sector                  0.07       0.41
Firm size
  Less than 20 employees       0.34       0.058
  20–99 employees              0.21       0.12
  100–199 employees            0.068      0.077
  200–499 employees            0.077      0.11
  500 or more employees        0.31       0.64
Female                         0.49       0.45
Age
  17–19                        0.025      0.008
  20–24                        0.12       0.046
  25–34                        0.31       0.24
  35–44                        0.29       0.32
  45–54                        0.19       0.30
  55–64                        0.079      0.099
Years of job tenure            5.6        10.1
41% of union workers are employed in the public sector, compared to only 7% of nonunion workers. Union workers are also much less likely to be employed in firms with fewer than 20 employees and much more likely to be employed in firms with over 500 employees than their nonunion counterparts, though this may in part just reflect the public/private sector difference. Union workers are also more likely to be male and tend to be older, with 30% of union workers being of age 45–54 compared to only 19% of nonunion workers. This reflects recent declines in access to unionization among new cohorts of labour market entrants (Beaudry et al. 2001). Finally, the average (interrupted) years of job tenure is substantially higher for union workers, reflecting the higher job stability in the union sector that is at the heart of some of the theoretical claims about how unions affect training. Given these substantial differences in observable characteristics, we need to examine union impacts controlling for other covariates to be sure that what is being observed in Table 1 is a true union impact. To do this, we use a probit estimator controlling for various combinations of observable individual and firm characteristics. Because some of the results to this point indicate substantial differences by gender, we present all of our results separately for males and females. Rather than present the estimated probit coefficients, which do not have an interpretable magnitude, the tables report the marginal effects (derivatives, or discrete changes for dummy variables, of the probability of obtaining training with respect to the specified covariates) along with their standard errors.
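As an illustration of the mechanics only, the following statsmodels sketch estimates a probit of the kind reported in Tables 4–7 and converts the coefficients to marginal effects (derivatives for continuous regressors, discrete changes for dummies). Variable names are placeholders, the subsample males is assumed to have been built from the earlier sketches, and the survey weights are ignored here for simplicity, so this is not a replication of the paper's estimates.

```python
import statsmodels.formula.api as smf

# Placeholder column names; 'males' is the male subsample of the analysis file.
formula = (
    "any_training ~ union + C(educ_group) + C(age_group) + manager"
    " + public_sector + C(firm_size_group) + I(tenure / 10) + I((tenure / 10) ** 2)"
)
probit_fit = smf.probit(formula, data=males).fit()

# Average marginal effects; dummy=True reports discrete changes for 0/1 regressors.
margeff = probit_fit.get_margeff(at="overall", dummy=True)
print(margeff.summary())
```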
Table 4 presents results for males in which the dependent variable is a dummy variable corresponding to overall training (i.e., either programme or course training related to current or future employment). The first column specification contains union status (whether the individual was covered by a collective agreement) and a constant as its only covariates. This demonstrates a union impact of the same order of magnitude as was observed in Table 1: unions increase the incidence of training among males by about 1% point relative to a nonunion average of 28%. The column two specification adds in covariates related to education and age. The education covariates are intended to pick up the extent to which formal schooling alters the costs and benefits of obtaining further education and training. The estimates indicate that more educated workers obtain substantially more training, and less educated workers less training, than those whose highest level of education is high school graduation (the base group). This fits with formal schooling and further training being complements in production and/or with formal schooling reducing the costs of further training, perhaps because those with more schooling have “learned how to learn”. The age variables reveal a strong pattern in which younger individuals have much higher training rates than older individuals, as one would predict in models of rational investment in training. Most importantly from our perspective, adding these variables strengthens the union impact on training a bit, making it statistically significant. In column 3 we add to the specification a variable corresponding to whether the individual had managerial responsibilities, to find out whether managers are more or less likely to get trained. The estimated coefficient indicates that workers with managerial or supervisory responsibilities are substantially more likely to obtain training than those without, and adding this variable increases the union impact by another percentage point. In the remaining columns of Table 4 we investigate the impacts of sector, firm size, seniority and province. In column 4, we add in a set of dummy variables for public sector employment and firm size as well as years of tenure and years of tenure squared.7 We add the firm size variables because of results in earlier work showing a correlation between firm size and training incidence, and because of the strong correlation between firm size and union status shown in Table 3. The public sector variable is included to control for the possibility that training is done differently in the public and private sectors and to allow for purer estimates of the firm size effect. Years of tenure are introduced to capture two potential effects. The first effect is that training is expected to take place early on the job for the standard reasons given by human capital theory (maximizing the number of periods for which the training will be productive). This effect is consistent with wage studies that show that the effect of tenure is larger early in the job (concave effect of tenure on wages), suggesting that most productive training takes place early on. The other potential effect is that 7 We use a quadratic specification for years of tenure with the firm by analogy with the wage studies that typically find a positive but declining (negative coefficient on the squared term) effect of tenure on wages.
Table 4 Simple probit results (marginal effects) for training status, males

Variable                 (1)              (2)              (3)              (4)              (5)
Union                    0.007 (0.010)    0.021 (0.011)*   0.036 (0.011)*   −0.041 (0.012)*  −0.030 (0.013)*
Education
  No HS degree           –                −0.087 (0.016)*  −0.082 (0.016)*  −0.059 (0.017)*  −0.044 (0.017)*
  Post-secondary         –                0.17 (0.024)*    0.16 (0.024)*    0.17 (0.024)*    0.18 (0.024)*
  Post-sec degree        –                0.14 (0.015)*    0.13 (0.015)*    0.13 (0.015)*    0.14 (0.016)*
  University deg.        –                0.25 (0.018)*    0.22 (0.018)*    0.19 (0.019)*    0.21 (0.020)*
Age
  17–19                  –                0.26 (0.042)*    0.28 (0.041)*    0.30 (0.042)*    0.32 (0.042)*
  20–24                  –                0.080 (0.020)*   0.088 (0.021)*   0.11 (0.021)*    0.12 (0.022)*
  35–44                  –                −0.005 (0.013)   −0.013 (0.013)   −0.032 (0.013)*  −0.037 (0.013)*
  45–54                  –                −0.024 (0.014)   −0.039 (0.014)*  −0.068 (0.015)*  −0.075 (0.015)*
  55–64                  –                −0.110 (0.017)*  −0.12 (0.016)*   −0.13 (0.016)*   −0.14 (0.016)*
Manager                  –                –                0.11 (0.012)*    0.090 (0.012)*   0.079 (0.012)*
Public sector            –                –                –                0.090 (0.016)*   −0.030 (0.020)
Firm size
  1–20                   –                –                –                −0.13 (0.013)*   −0.13 (0.013)*
  20–99                  –                –                –                −0.086 (0.013)*  −0.069 (0.014)*
  100–199                –                –                –                −0.051 (0.018)*  −0.041 (0.019)*
  200–499                –                –                –                −0.041 (0.017)*  −0.031 (0.018)
Tenure/10                –                –                –                0.045 (0.030)    0.044 (0.030)
Ten. Sq./100             –                –                –                −0.013 (0.015)   −0.010 (0.015)
Ind. and Prov. Dummies   No               No               No               No               Yes
Pseudo R2                0.0001           0.056            0.066            0.087            0.109

Note 8,074 observations used in the estimation. Standard errors are in parentheses
*,+ Mean effect is significantly different from zero at the 5 and 10% significance level, respectively
in the case of specific capital, firms may prefer to invest in more senior workers who are less likely to turn over than workers who have just joined. Since these two effects go in opposite directions, the effect of tenure on training is ambiguous. The results indicate that public sector workers are substantially more likely to obtain training than their private sector counterparts. The estimated firm size effects reveal a clear pattern of training increasing with firm size. This fits with results from earlier papers. Adding the firm size controls also causes the union effect to move from significantly positive to significantly negative, indicating that some of the perceived positive union effects on training are really just disguised firm size effects. The effect of tenure is not significant, indicating that the two effects discussed above may indeed be offsetting each other. In the final column, we add a set of nine industry dummy variables and nine provincial dummy variables to the specification. This changes the magnitude of the other estimated coefficients very little and does not change the implications drawn from those coefficients at all. The union impact remains negative and significant though smaller (in absolute value) than in column 4. As we saw in Table 1, patterns for overall training incidence can hide large differences within specific training categories. In Table 5, we re-estimate the most complete specification from Table 4 for four different training status dependent variables. The first column contains the results using programme training as our dependent variable. The estimates again indicate some positive relationship between education and programme training, though that relationship is not monotonic. In particular, post-secondary graduates and university graduates do less training of this type than do those with some (but not completed) post-secondary education. Since programme training really corresponds to going back to school, this result is not surprising: individuals with a university education need to get less new education because they are already highly educated. The age variables again indicate a strong negative relationship between age and training, and being in the public sector has a positive impact on training. Interestingly, there is no clear relationship between firm size and training. This may fit with the claim that programme training is true general human capital training that occurs off the firm site: there is no reason to believe that larger firms have a comparative advantage in providing such training. Nonetheless, this result is somewhat surprising because in models in which firms help pay for general human capital investment, increased job stability should lead to higher investment levels, and larger firms tend to have more job stability. Interestingly, years of tenure have a negative and significant (though declining) effect on training, which is hard to reconcile with standard human capital theory. Including all of these covariates dramatically reduces the size of the union impact on programme training. Table 1 indicated that unionized males obtained approximately 3% points less programme training than nonunion males. However, the results from column 1 indicate that training rates are essentially the same for union and nonunion males once one controls for other characteristics. Thus, the evidence that unions lead to a reduction in general human capital investment is not strong.
Table 5 Probit results (marginal effects) for different types of training, males

Variable           Programme training   Course training    General training    Firm spec. training
Union              −0.003 (0.0064)      −0.027 (0.011)*    −0.022 (0.011)*     −0.0054 (0.0067)
Education
  No HS degree     −0.007 (0.009)       −0.040 (0.015)*    −0.024 (0.016)      −0.017 (0.007)*
  Post-secondary   0.094 (0.018)*       0.100 (0.022)*     0.172 (0.024)*      0.005 (0.011)
  Post-sec degree  0.065 (0.010)*       0.085 (0.014)*     0.129 (0.015)*      0.015 (0.007)*
  University deg.  0.075 (0.014)*       0.145 (0.018)*     0.189 (0.019)*      0.018 (0.009)*
Age
  17–19            0.25 (0.039)*        0.015 (0.040)      0.287 (0.043)*      0.026 (0.027)
  20–24            0.073 (0.013)*       0.025 (0.019)      0.122 (0.020)*      −0.013 (0.010)
  35–44            −0.032 (0.006)*      0.0033 (0.012)     −0.035 (0.011)*     −0.0003 (0.006)
  45–54            −0.063 (0.005)*      −0.0027 (0.014)    −0.072 (0.012)*     −0.002 (0.007)
  55–64            −0.063 (0.004)*      −0.059 (0.016)*    −0.126 (0.012)*     −0.005 (0.009)
Public sector      0.0066 (0.0108)      −0.021 (0.017)     −0.016 (0.018)      −0.010 (0.008)
Firm size
  1–20             0.010 (0.008)        −0.135 (0.010)*    −0.043 (0.013)*     −0.062 (0.005)*
  20–99            0.012 (0.008)        −0.073 (0.011)*    0.003 (0.013)       −0.045 (0.005)*
  100–199          0.007 (0.011)        −0.034 (0.015)*    0.015 (0.018)       −0.029 (0.006)*
  200–499          0.025 (0.012)*       −0.048 (0.014)*    0.017 (0.017)       −0.028 (0.005)*
Tenure/10          −0.058 (0.016)*      0.107 (0.026)*     0.004 (0.026)       0.024 (0.014)+
Ten. Sq./100       0.029 (0.008)*       −0.042 (0.013)*    0.003 (0.013)       −0.009 (0.007)
Pseudo R2          0.058                0.12               0.21                0.13

Note 8,074 observations used in the estimation. Standard errors are in parentheses. Controls for managerial responsibility, industry and province are included but not reported in the table
*,+ Mean effect is significantly different from zero at the 5 and 10% significance level, respectively
In column 2, we present results from the same specification with a dummy variable corresponding to course training as the dependent variable. We argued earlier that course training could be viewed as providing a relatively broad definition of firm specific training. For training of this type, education again has a strong and positive effect on training. Interestingly, the effects of age are no longer as clear, with all age groups below age 55 having quite similar training rates. This appears to indicate that as long as there is at least ten years of an individual’s working life left, firms and workers believe it is worthwhile continuing to invest in this type of training. While this is a reasonable use of training, the fact that it does not decline at all with age below 55 years old is surprising. In contrast to programme training, firm size shows up with a strong pattern, positively related to course training. Tenure has a positive and declining effect on course training, which is consistent with standard human capital theory. The impact of adding these controls is quite dramatic. The union effect on course training goes from positive 3% points in Table 1 to negative 3% points in this table. Column 3 contains results using our second definition of general human capital, which includes both programme training and any course training not provided by the employer. The results using this definition are quite similar to those presented in column 1 except for the effect of tenure which is now positive,
as expected, but not significant. The union impact estimated with the second definition of general training is again negative and larger than that estimated with the first definition, though still not very substantial. Finally, column 4 contains estimates using our more restricted definition of firm specific training: course training that is directly provided by the employer. The patterns again indicate positive education effects but, as in column 2, there is no clear age pattern. There is again a relatively clear firm size pattern but a weaker tenure effect. The union impact is both economically insubstantial and statistically insignificant. If we use column 1 as our most precise definition of general human capital training and column 4 as our most precise definition of firm specific training, then the conclusion from Table 5 is that unionization has essentially no impacts on either general or firm specific human capital investment once one controls for other covariates. Further investigation indicates that the sizeable reduction in the union impact on programme training witnessed in Table 5 relative to Table 1 arises primarily because of the introduction of controls for age, which has negative effects on training and is positively related to union status. In contrast, the reduction in the impact of unionization on firm specific training stems mainly from the introduction of firm size variables. Results from the same exercises for females are presented in Tables 6 and 7. In Table 6, we recreate the exercise from Table 4 in which we introduce sequentially a set of covariates to investigate the impact of controlling for them upon our union effect estimate. For males, this exercise ultimately had very little impact on the estimated union effect. However, for females, introducing the covariates reduces the impact of unionization on overall training from 0.080 to −0.036. The latter estimate is very similar to that found for males, suggesting that the large differences between males and females in the first row of Table 1 arise from differences in the distributions of observable covariates between males and females. The patterns in training relative to the other observed characteristics are quite similar to those found for males: both education and firm size have positive effects on training, while age has a negative impact. In Table 7, we present the results of probits estimated with different definitions of general and firm specific training as the dependent variables for females. As for males, the union impact is small and negative both for programme and course training. The alternative human capital investment measures also yield similar conclusions for females as for males. In particular, the impact of unionization on general training is negative and statistically significant, while the impact of unionization on firm specific training is not statistically significant. Overall, once one controls for the impacts of other covariates, the impacts of unions on training are generally small for both males and females. The only exception is the broader measure of general training for which the union impact is negative and significant for both men and women.
6 Probit estimates of source of payment
As with the study of the incidence of training, correlations between unionization status and other covariates raise questions of whether simple tabulations
Table 6 Simple probit results (marginal effects) for training status, females

Variable                 1                2                3                4                5
Union                    0.080 (0.011)*   0.060 (0.011)*   0.069 (0.011)*   −0.020 (0.013)   −0.036 (0.013)*
Education
  No HS degree           –                −0.120 (0.017)*  −0.109 (0.017)*  −0.088 (0.018)*  −0.073 (0.019)*
  Post-secondary         –                0.123 (0.022)*   0.122 (0.022)*   0.111 (0.022)*   0.093 (0.022)*
  Post-sec degree        –                0.155 (0.014)*   0.153 (0.014)*   0.154 (0.015)*   0.140 (0.015)*
  University deg.        –                0.260 (0.018)*   0.247 (0.018)*   0.219 (0.018)*   0.193 (0.019)*
Age
  17–19                  –                0.174 (0.045)*   0.185 (0.046)*   0.202 (0.046)*   0.235 (0.047)*
  20–24                  –                0.076 (0.021)*   0.086 (0.021)*   0.103 (0.022)*   0.128 (0.023)*
  35–44                  –                −0.004 (0.013)   −0.005 (0.013)   −0.019 (0.013)   −0.027 (0.013)*
  45–54                  –                0.001 (0.014)    −0.003 (0.014)   −0.018 (0.015)   −0.027 (0.015)+
  55–64                  –                −0.109 (0.018)*  −0.115 (0.018)*  −0.112 (0.019)*  −0.124 (0.018)*
Manager                  –                –                0.131 (0.012)*   0.119 (0.013)*   0.109 (0.013)*
Public sector            –                –                –                0.093 (0.015)*   0.051 (0.018)*
Firm size
  1–20                   –                –                –                −0.162 (0.012)*  −0.152 (0.013)*
  20–99                  –                –                –                −0.059 (0.014)*  −0.058 (0.014)*
  100–199                –                –                –                −0.030 (0.020)   −0.029 (0.020)
  200–499                –                –                –                −0.009 (0.018)   −0.002 (0.018)
Tenure/10                –                –                –                0.020 (0.029)    0.002 (0.029)
Ten. Sq./100             –                –                –                −0.007 (0.014)   0.001 (0.014)
Ind. and prov. dummies   No               No               No               No               Yes
Pseudo R2                0.0054           0.059            0.071            0.094            0.116

Note 8,608 observations used in the estimation. Standard errors are in parentheses
*,+ Mean effect is significantly different from zero at the 5 and 10% significance level, respectively
Table 7 Probit results (marginal effects) for different types of training, females

Variable           Programme training   Course training    General training    Firm spec. training
Union              −0.006 (0.007)       −0.025 (0.011)     −0.044 (0.012)*     0.006 (0.006)
Education
  No HS degree     −0.003 (0.012)       −0.068 (0.016)*    −0.053 (0.017)*     −0.020 (0.008)*
  Post-secondary   0.075 (0.017)*       0.039 (0.019)*     0.081 (0.021)*      0.013 (0.010)
  Post-sec degree  0.076 (0.010)*       0.076 (0.013)*     0.128 (0.014)*      0.009 (0.006)
  University deg.  0.093 (0.015)*       0.123 (0.017)*     0.175 (0.018)*      0.012 (0.008)
Age
  17–19            0.185 (0.040)*       0.033 (0.045)      0.223 (0.046)*      −0.024 (0.019)
  20–24            0.117 (0.016)*       −0.052 (0.017)*    0.101 (0.020)*      0.008 (0.011)
  35–44            −0.024 (0.006)*      0.007 (0.012)      −0.034 (0.011)*     0.009 (0.006)
  45–54            −0.035 (0.007)*      0.012 (0.014)      −0.044 (0.013)*     0.013 (0.007)+
  55–64            −0.062 (0.006)*      −0.052 (0.017)*    −0.113 (0.015)*     −0.0002 (0.010)
Public sector      0.0005 (0.0090)      0.042 (0.015)*     0.017 (0.015)       0.021 (0.008)*
Firm size
  1–20             −0.026 (0.007)*      −0.129 (0.010)*    −0.071 (0.012)*     −0.064 (0.005)*
  20–99            −0.010 (0.007)       −0.052 (0.012)*    −0.008 (0.013)      −0.028 (0.005)*
  100–199          −0.015 (0.010)       −0.007 (0.017)     −0.009 (0.018)      −0.004 (0.005)
  200–499          −0.011 (0.009)       0.013 (0.016)      0.006 (0.017)       −0.001 (0.007)
Tenure/10          −0.0078 (0.016)*     0.087 (0.025)*     −0.081 (0.026)*     0.048 (0.012)*
Ten. Sq./100       0.024 (0.008)*       −0.032 (0.012)*    0.042 (0.013)*      −0.024 (0.006)*
Pseudo R2          0.126                0.137              0.088               0.137
Note 8,608 observations used in the estimation. Standard errors are in parentheses. Controls for managerial responsibility, industry and province are included but not reported in the table *,+ Mean effect is significantly different from zero at the 5 and 10% significance level, respectively
of union impacts on the sources of payment for training reported in Table 2 reflect true union impacts. Again, we wish to control for other covariates and re-estimate the union impact. To do this, we run the same specification as was used in Tables 5 and 7 with two new dependent variables: (1) a dummy variable corresponding to whether an employer helped pay for the training; and (2) a dummy variable equal to one if the individual helped pay for the training but the employer did not. The first dependent variable is intended to capture any employer involvement in financing training. The second focuses exclusively on individual contributions. We examine sources of payment for our two definitions of general human capital and one (all courses) definition of specific human capital. Table 8 reports the marginal effects calculated using estimated coefficients from a probit with the first dependent variable. We only report the marginal effects associated with union and tenure, since few interesting patterns emerged for the other regressors such as age or education. The first column of the first panel shows results for males who reported taking programme training. Recall that the results in Table 2 indicate that unionized employers are more likely to pay for such training than are nonunion employers. This result appears to hold up once one controls for other covariates, although the union differential is both smaller than in Table 2 and not statistically significant. Tenure has a large
Table 8 Probit results (marginal effects) for who paid for training

Variable        Programme training   Course training    General training

Males: employer paid for training
Union           0.042 (0.062)        0.023 (0.015)      0.022 (0.028)
Tenure/10       0.882 (0.160)*       0.173 (0.035)*     0.582 (0.071)*
Ten. Sq./100    −0.319 (0.087)*      −0.058 (0.018)*    −0.203 (0.037)*

Males: worker alone paid for training
Union           −0.056 (0.054)       −0.025 (0.011)*    −0.034 (0.022)
Tenure/10       −0.673 (0.148)*      −0.154 (0.029)*    −0.474 (0.060)*
Ten. Sq./100    0.270 (0.081)*       0.058 (0.014)*     0.185 (0.032)*

Females: employer paid for training
Union           −0.008 (0.050)       −0.015 (0.016)     −0.022 (0.030)
Tenure/10       0.579 (0.110)*       0.233 (0.034)*     0.661 (0.067)*
Ten. Sq./100    −0.202 (0.064)*      −0.091 (0.017)*    −0.233 (0.035)*

Females: worker alone paid for training
Union           −0.071 (0.055)       0.027 (0.014)*     −0.003 (0.027)
Tenure/10       −0.379 (0.125)*      −0.111 (0.027)*    −0.418 (0.060)*
Ten. Sq./100    0.130 (0.071)+       0.046 (0.013)*     0.148 (0.031)*
Note Standard errors in parentheses. The table entries correspond to probability derivatives. The estimated models include the same regressors as in Tables 5 and 7, but only the estimates for union and tenure are reported. *,+ Mean effect is significantly different from zero at the 5 and 10% significance level, respectively
and positive (but declining) effect, which is consistent with employers investing in more stable workers. Results using the broader definition of general human capital, given in column 3, are very similar. According to the course based definition of firm specific training, firms also play a greater funding role in this type of investment in the union versus the nonunion sector. Tenure has a positive though smaller effect than in the case of programme training. The second panel of results in Table 8 reports the marginal effects for the probability that the individual alone (without the help of the firm) pays for the investment with respect to our various covariates. In this case, for programme training there is no evidence of a substantial relationship between union status and self-payment for training. The same result holds for the alternative definition of general training in column 3. Interestingly, the effect of tenure now turns negative. The same patterns hold true for investment in course training in column 2. Here, though, the union effect is negative and statistically significant. The last two panels of Table 8 report the corresponding results for females. The estimates of employer contributions to training indicate union impacts that are economically smaller than those for males. In terms of worker payment for training, the results indicate that unionization leads to a decline in such payment for general training but a statistically significant increase for specific training. Overall, the results of these exercises indicate that unions have little impact on the involvement of firms and workers in paying for both general and firm specific training while leading to a decline in the proportion of workers investing in firm specific training.
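The two dependent variables used in this section can be built directly from the spell-level payer indicators. In the sketch below, paid_by_employer and paid_by_self are hypothetical 0/1 columns recording whether each source helped fund the spell (respondents can report several sources), added to the spell-level data frame from the earlier sketches.

```python
# Employer involvement in funding the spell (any employer contribution).
longest["employer_paid"] = (longest["paid_by_employer"] == 1).astype(int)

# Worker paid for the spell without any employer contribution.
longest["worker_alone_paid"] = (
    (longest["paid_by_self"] == 1) & (longest["paid_by_employer"] == 0)
).astype(int)
```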
Once one controls for other covariates, then, our results paint slightly different pictures for men and women. For both men and women, unionization is related to small decreases in either general or firm specific human capital investment. There is also weak evidence that unions generate greater employer involvement in payment for both general and firm specific human capital for men. Thus, unionization appears to shift the means of payment more than the amount of investment for men. This fits with the kinds of models in which union pay structures lead to unionized firms taking a greater role in funding general human capital investment but do not necessarily change the amount of investment. To explain the small declines in general human capital investment, one could then graft onto these types of models the type of distinction between alternative human capital (useful only outside the firm) and general human capital (useful both inside and outside the current firm) proposed by Kuhn and Sweetman. In that case, more stable union work arrangements could lead to lower investment by workers while firms play an expanded role in funding general human capital. The finding that tenure has a positive effect on whether employers pay for training is quite consistent with this view. In that case, one would also expect to see the proportion of non-specific training spells funded by workers alone decrease as firms expand their role while workers invest less in alternative human capital. The negative effect of tenure on the probability of workers paying for training alone is also consistent with this view. For women, the results again indicate small and negative effects of unionization on both general and firm specific human capital investment. Both of these effects are more or less comparable to similarly estimated effects for men. In terms of payment, unionization appears to have little impact on the proportion of spells in which firms help in the funding but it does have negative (though not significant) effects on the proportion of general human capital training invested in by workers alone. As in the case of men, the most robust result is that employer involvement in training increases with tenure while the opposite happens to worker involvement.
7 Robustness checks
As a further check of the robustness of the results, we re-estimated our main models using an earlier (1993) version of the AETS. The results were very similar to those obtained with the 1997 AETS. For instance, in both years the raw difference in training rates between union and nonunion workers (for men and women pooled) is 4% points to the advantage of union workers. The union advantage turns negative in both years, however, once other characteristics are controlled for using the probit models. We also re-estimated these main models using a more recent version of the AETS (2002) that asks slightly different questions about training, but found once again very similar results. All through the empirical analysis, we have assumed that the union status of workers was exogenous. As in any study of union impacts, however, this assumption may be violated if workers are selected endogenously into union
jobs. To address this issue, we tried to use interprovincial changes in unionization rates as an underlying source of variation in union status in a setting where we pooled the years of available data (1993, 1997, and 2002) together. The hope was that changes in labour legislation, which are mostly determined at the provincial level in Canada, would provide enough variation in unionization rates to provide credible estimates of the union effect. Unfortunately, there was not enough interprovincial variation in unionization rates (weak instrument problem) for this estimation strategy to work in practice.
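One way to see the weak-instrument problem described here is to inspect the first stage of such an IV strategy. The sketch below, with hypothetical variable names and the pooled 1993/1997/2002 file called pooled, regresses individual union status on the province-by-year unionization rate with province and year effects and reports the first-stage F statistic; a small value signals that the instrument is too weak to be useful.

```python
import statsmodels.formula.api as smf

# First stage of the (abandoned) IV strategy; variable names are illustrative.
first_stage = smf.ols(
    "union ~ prov_year_union_rate + C(province) + C(year)",
    data=pooled,  # pooled 1993, 1997 and 2002 AETS samples
).fit(cov_type="cluster", cov_kwds={"groups": pooled["province"]})

print(first_stage.f_test("prov_year_union_rate = 0"))  # small F => weak instrument
```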
8 Conclusions
In this paper, we have implemented an empirical investigation of the impact of unionization on training in Canada using the AETS. Simple tabulations indicate that unions have positive though small direct impacts on overall training levels. However, these overall effects hide larger differences for specific sub-groups and for different types of human capital investment. In particular, there are substantial differences between males and females. Basic tabulations also indicate some substantial differences in sources of funding for training between the union and nonunion sectors. Our main results stem from exercises in which we control for the effects of other covariates to get a cleaner picture of union impacts. Once one controls for other covariates, our results paint relatively similar pictures for men and women. If anything, these effects are typically small and negative, in the range from −4% points to 0. By contrast, when we do not control for other covariates, union effects range from 10% points (course training for women) to −3% points (programme training for men). So it appears that most of the difference in the raw union effect across subgroups is a spurious consequence of failing to control for other covariates. What unionization does appear to do, to some extent, is generate greater employer involvement in payment for both general and firm specific human capital for men, though these effects are typically not significant. Thus, unionization appears to shift the means of payment more than the amount of investment for men. This fits with the kinds of models in which union pay structures lead to unionized firms taking a greater role in funding general human capital investment but do not necessarily change the amount of investment. To explain the small declines in general human capital investment for men, one could then graft onto these types of models the type of distinction between alternative human capital (useful only outside the firm) and general human capital (useful both inside and outside the current firm) proposed by Kuhn and Sweetman. In that case, more stable union work arrangements could lead to lower investment by workers while firms play an expanded role in funding general human capital. The pattern of tenure effects is also consistent with this view. For women, the results indicate effects of unionization on general and firm specific human capital investment similar to those for men. But in terms of payments, unionization appears to have, if anything, a negative impact on the
proportion of spells in which firms help in the funding. It does not have any substantial effects on the proportion of general human capital training invested in by workers alone. As in the case of men, the most robust finding is that employer involvement increases with tenure while the opposite is true for worker involvement. One possible explanation for the generally weak union effects is that most of the effect of unions operates indirectly by increasing tenure and job stability, which in turn gets employers more involved in the provision of training for workers.
Acknowledgements We would like to thank André Léonard, Zhengxi Lin, Stephen Machin, and three anonymous referees for their comments on an earlier draft of the paper. All remaining errors are ours.
References
Acemoglu D, Pischke JS (1999) The structure of wages and investment in general training. J Polit Econ 107:539–572
Beaudry P, Green DA, Townsend J (2001) An investigation of changes in wage outcomes across cohorts in Canada. University of British Columbia
Barron JM, Fuess SM, Loewenstein MA (1987) Further analysis of the effect of unions on training. J Polit Econ 95:632–640
Becker G (1964) Human capital. Columbia University Press, New York
Booth AL, Chatterji M (1998) Unions and efficient training. Econ J 108:328–345
Booth AL, Francesconi M, Zoega G (2003) Unions, work-related training, and wages: evidence for British men. Ind Labor Relat Rev 57:68–91
Booth AL, Böheim R (2004) Trade union presence and employer-provided training in Great Britain. Ind Relat 43:520–545
Duncan G, Stafford F (1980) Do union members receive compensating wage differentials? Am Econ Rev 70:355–371
Dustmann C, Schönberg U (2004) Training and union wages. IZA working paper no. 1435
Freeman RB, Medoff JL (1984) What do unions do? Basic Books, New York
Green F (1993) The impact of trade union membership on training in Britain. Appl Econ 25:1033–1043
Green F, Machin S, Wilkinson D (1999) Trade unions and training practices in British workplaces. Ind Labor Relat Rev 52:179–195
Hashimoto M (1981) Firm specific human capital as a shared investment. Am Econ Rev 71:1070–1087
Kennedy S, Drago R, Sloan J, Wooden M (1994) The effect of trade unions on the provision of training: Australian evidence. Br J Ind Relat 32:565–578
Kuhn P, Sweetman A (1999) Vulnerable seniors: unions, tenure, and wages following permanent job loss. J Labor Econ 17:671–693
Loewenstein MA, Spletzer JR (1998) Dividing the costs and returns to general training. J Labor Econ 16:142–171
Lynch L (1992) Private sector training and the earnings of young workers. Am Econ Rev 82:299–312
Mincer J (1983) Union effects: wages, turnover and job training. In: Reid JD (ed) Research in labor economics: new approaches to labor unions, Suppl 2. JAI Press, Greenwich, pp 217–252
Neal D (1995) Industry-specific human capital: evidence from displaced workers. J Labor Econ 13:653–677
Parent D (2000) Industry specific capital and the wage profile: evidence from the National Longitudinal Survey of Youth and the Panel Study of Income Dynamics. J Labor Econ 18:306–323
Stevens M (1994) A theoretical model of on-the-job training with imperfect competition. Oxford Econ Papers 46:537–562
Weiss Y (1985) The effect of labor unions on investment in training: a dynamic model. J Polit Econ 93:994–1007
Evaluating multi-treatment programs: theory and evidence from the U.S. Job Training Partnership Act experiment Miana Plesca · Jeffrey Smith
Accepted: 30 August 2006 / Published online: 19 April 2007 © Springer-Verlag 2007
Abstract This paper considers the evaluation of programs that offer multiple treatments to their participants. Our theoretical discussion outlines the tradeoffs associated with evaluating the program as a whole versus separately evaluating the various individual treatments. Our empirical analysis considers the value of disaggregating multi-treatment programs using data from the U.S. National Job Training Partnership Act Study. This study includes both experimental data, which serve as a benchmark, and non-experimental data. The JTPA experiment divides the program into three treatment “streams” centered on different services. Unlike previous work that analyzes the program as a whole, we analyze the streams separately. Despite our relatively small sample sizes, our findings illustrate the potential for valuable insights into program operation and impact to get lost when aggregating treatments. In addition, we show that many of the lessons drawn from analyzing JTPA as a single treatment carry over to the individual treatment streams. Keywords Program evaluation · Matching · Multi-treatment program · JTPA
An earlier version of this paper circulated under the title “Choosing among Alternative Non-Experimental Impact Estimators: The Case of Multi-Treatment Programs”. M. Plesca Department of Economics, University of Guelph, Guelph, ON, Canada N1G 2W1 e-mail:
[email protected] J. Smith (B) Department of Economics, University of Michigan, 238 Lorch Hall, 611 Tappan Street, Ann Arbor, MI 48109-1220, USA e-mail:
[email protected]
1 Introduction
In contrast to, say, clinical trials in medicine, many social programs, especially active labor market programs, embody heterogeneous treatments. Individuals who participate in such programs receive different treatments, at least in part by design. In this paper, we consider the implications of this treatment heterogeneity for program evaluation. In our conceptual discussion, we examine the links between the level of treatment aggregation in an evaluation and the parameters of interest, the evaluation design, the available sample sizes (and therefore the precision of the resulting impact estimates) and the overall value of the knowledge gained from the evaluation. We raise the possibility of misleading cancellation arising from aggregating treatments with positive and negative (or just large and small) mean impacts. We also present empirical evidence from an important evaluation of a multi-treatment program in the United States: the Job Training Partnership Act (JTPA). Our data come from an experimental evaluation denoted the National JTPA Study (NJS), which included the collection of “ideal” data on a non-experimental comparison group at some sites. Using the NJS data, we consider the impacts of disaggregated treatment types, and look for evidence of cancellation in the overall program impact estimates. As participants play an important role in determining treatment type in JTPA, we also look for differences by treatment type in the determinants of participation that might result from differences in the economics motivating participation. Finally, we examine the performance of non-experimental matching estimators applied to the three main treatment types in the JTPA program using the experimental data as a benchmark. Taken together, these analyses allow us to see the extent to which some of the lessons learned in related analyses that regard JTPA as a single aggregated treatment carry over to the disaggregated treatments. Our empirical analysis also adds (at the margin) to the literature on applied semi-parametric matching methods and, unfortunately, also illustrates the loss of precision that comes from disaggregating by treatment type. We find that many of the conclusions drawn from research that treats JTPA as a single treatment remain valid when looking at disaggregated treatments. At the same time, differences emerge when disaggregating that illustrate the value of doing so. The remainder of the paper proceeds as follows. Section 2 provides a conceptual discussion of issues related to disaggregation by treatment type. Section 3 describes the evaluation design and the NJS data, while Sect. 4 describes the econometric methods we employ. Section 5 presents our empirical results and Sect. 6 concludes.
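As background for the matching analysis previewed here, the sketch below implements a generic single nearest-neighbour propensity-score matching estimator of the mean impact of treatment on the treated. It is not the set of estimators actually used in the paper (those are described in Sect. 4); the function and its inputs are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_nearest_neighbour(X_treat, y_treat, X_comp, y_comp):
    """Nearest-neighbour propensity-score matching (with replacement).

    X_treat, X_comp: covariate arrays for treated and comparison observations.
    y_treat, y_comp: outcome arrays (e.g., post-programme earnings) as numpy arrays.
    Returns an estimate of the mean impact of treatment on the treated.
    """
    X = np.vstack([X_treat, X_comp])
    d = np.concatenate([np.ones(len(X_treat)), np.zeros(len(X_comp))])

    # Estimate the propensity score with a simple logit.
    pscore = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    p_treat, p_comp = pscore[: len(X_treat)], pscore[len(X_treat):]

    # Match each treated unit to the comparison unit with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(p_comp.reshape(-1, 1))
    _, idx = nn.kneighbors(p_treat.reshape(-1, 1))
    return float(np.mean(y_treat - y_comp[idx.ravel()]))
```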
2 Treatment aggregation and program evaluation

Most active labor market policies include a variety of treatments. The JTPA program studied here offers classroom training in many different occupational
skills, subsidized on-the-job training at many different private firms, several types of job search assistance from various providers, adult basic education, subsidized work experience at various public or non-profit enterprises, and so on. Other countries also offer multiple service types to their unemployed. For example, in addition to the relatively standard fare offered by JTPA, Canada offers training in starting a small business, the New Deal for Young People (NDYP) in the United Kingdom offers participation in an “Environmental Task Force”, the Swiss system studied in Gerfin and Lechner (2002) offers language training for immigrants, and Germany places some unemployed with temporary help agencies. In most countries, individuals get assigned to one of the multiple treatments via interaction with a caseworker, though some programs, such as the U.S. Worker Profiling and Reemployment Services System (WPRS) examined in Black et al. (2003), also make use of statistical treatment rules.

Moving from a program with one homogeneous treatment to a multiple treatment program greatly expands the set of possible questions of interest. In addition to the basic question of the labor market impacts of the program taken as a whole, researchers and policymakers will now want to know the impact of each treatment on those who receive it relative to no treatment and relative to other possible treatments. They will also want to know the effect of each treatment on those who do not receive it and they will likely want to know about and perhaps evaluate the system that allocates participants to treatments, as in Lechner and Smith (2007). The existing literature applies non-experimental methods to answer all of these questions and experimental methods to answer some. Most experimental evaluations focus on estimating the impacts of treatments on those who receive them, though others, such as the U.S. Negative Income Tax experiments described in Pechman and Timpane (1975) and the Canadian Self-Sufficiency Project experiment described in Michalopoulos et al. (2002), include random assignment to alternative treatments. The latter aids in answering questions regarding the impacts of treatments not actually received and the effect of alternative statistical treatment rules; see Manski (1996) for more discussion.

In thinking about evaluating the impact of treatments actually received on those who receive them, the key decision becomes how finely to disaggregate the treatments. Disaggregating into finer treatments avoids problems of cancellation in which the impacts of particularly effective treatments get drowned out by those of relatively ineffective treatments. At the same time, finer disaggregation implies either a loss of precision due to reduced sample sizes for each treatment or else a much more expensive evaluation (assuming reliance on survey data in addition to, or instead of, administrative records). In practice, different evaluations resolve these issues differently. Consider the case of classroom training. Both the JTPA evaluation considered here and the evaluation of the NDYP in Dorsett (2006) combine all classroom training into a single aggregate treatment. In contrast, the evaluation of Swiss active labor market policy in Gerfin and Lechner (2002) distinguishes among eight different services (five of them types of classroom training) along with
non-participation, and the evaluation of East German active labor market policy in Lechner et al. (2008) distinguishes among short training, long training and retraining (and non-participation). Perhaps not surprisingly, the German and Swiss evaluations both rely on administrative data, which allow much larger samples at a reasonable cost. In experimental evaluations, choices about the level of disaggregation can interact with choices about the timing of randomization (and thereby with the cost of the experiment). In the NJS, the evaluation designers faced the choice of whether to conduct random assignment at intake, which occurred at a centralized location in each site, or at the many different service providers at each site. Random assignment at intake meant lower costs and less possibility for disruption, but it also meant assignment conditional on recommended services rather than on services actually initiated. As we document below, though clearly related, these differ substantially. Randomization later would have allowed the construction of separate experimental impacts for each provider (as well as various meaningful combinations of providers). In the end, cost concerns won out, with implications that we describe in Sect. 3.2.
3 Institutions, data and evaluation design

3.1 Institutions

From their inception in 1982 as a replacement for the Comprehensive Employment and Training Act to their replacement in 1998 by the Workforce Investment Act, the programs administered under the U.S. Job Training Partnership Act (JTPA) constituted the largest federal effort to increase the human capital of the disadvantaged. The primary services provided (without charge) under JTPA included classroom training in occupational skills (CT-OS), subsidized On-the-Job Training (OJT) at private firms and Job Search Assistance (JSA). Some participants (mainly youth) also received adult basic education designed to lead to a high school equivalency or subsidized “work experience” in the public or non-profit sectors. Eligibility for the JTPA program came automatically with receipt of means-tested transfers such as Food Stamps and Aid to Families with Dependent Children (AFDC — the main federal-state program for single parents) or its successor Temporary Assistance for Needy Families (TANF). Individuals with family income below a certain cutoff in the preceding 6 months were also eligible for JTPA services (along with a few other small groups such as individuals with difficulty in English). The income cutoff was high enough to include individuals working full time at low wage jobs looking to upgrade their skills. Devine and Heckman (1996) provide a detailed description of the eligibility rules and an analysis of the eligible population they shaped.

As part of the “New Federalism” of the early Reagan years, JTPA combined federal, state and local (mainly county) components. The federal government provided funds to the states (under a formula based on state level unemployment rates and numbers of eligible persons) and defined the basic outlines
of the program, including eligibility criteria, the basics of program services and operation, and the structure of the performance management system that provided budgetary incentives to local “Service Delivery Areas” (SDAs) that met or exceeded certain targets. The states filled in the details of the performance management system and divided up the funds among the SDAs (using the same formula). The local SDAs operated the program on a daily basis, including determining participant eligibility, contracting with local service providers (which included, among others, community organizations, public community colleges and some for-profit providers) and determining, via caseworker consultation with each participant, the assignment to particular services. The performance management system provided an incentive for SDAs to “cream skim” the more employable among their eligible populations into their programs. See Heckman et al. (2002) and Courty and Marschke (2004) for more on the JTPA performance system.

In thinking about participation in JTPA, differences between the U.S. and typical European social safety nets matter. Workers in the U.S. receive Unemployment Insurance (UI) for up to 6 months if they lose their job and have sufficient recent employment. Participation in JTPA does not lengthen UI eligibility. Single parents (and in some cases couples with children and both parents unemployed) can receive cash transfers. Other able-bodied adults generally receive only food stamps plus, in some states, small cash transfers in the form of general assistance. The wealth of other programs providing services similar to those offered by JTPA matters for the interpretation of the participation and non-participation states defined more formally subsequently. Many other government entities at the federal, state and local levels offer job search assistance or classroom training, as do many non-profit organizations. Individuals can also take courses at public 2-year colleges with relatively low tuition (and may also receive government grants or loans to help them do so). As a result of this institutional environment, many control group members receive training services that look like those that treatment group members receive from JTPA. Some of the eligible non-participant comparison group members do as well, with a handful also participating in JTPA itself during the follow-up period.
3.2 The NJS evaluation design

As described in Doolittle and Traeger (1990), the evaluation took place at a non-random sample of only sixteen of the more than 600 JTPA SDAs. Eligible applicants at each site during all or part of the period November 1987–September 1989 were randomly assigned to experimental treatment and control groups. The treatment group remained eligible for JTPA while the control group was embargoed from participation for 18 months. Bloom et al. (1997) summarize the design and findings. Potential participants received service recommendations prior to random assignment. These recommendations form the basis for the three experimental
“treatment streams” that we analyze in our empirical work. Individuals recommended to receive CT-OS, perhaps in combination with additional services such as JSA but not including OJT, constitute the CT-OS treatment stream. Similarly, individuals recommended to receive OJT, possibly in combination with additional services other than CT-OS, constitute the OJT treatment stream. The residual “Other” treatment stream includes individuals not recommended for either CT-OS or OJT (along with a small number recommended for both). Placing the recommendations prior to randomization allows the estimation of experimental impacts for sub-groups likely to receive particular services. Our analysis focuses on treatment streams because the design just described implies that we have an experimental benchmark for the treatment streams but not for individual treatments. The NJS also includes a non-experimental component designed to allow the testing of non-experimental evaluation estimators as in LaLonde (1986) and Heckman and Hotz (1989). To support this aspect of the study, data on Eligible Non-Participants (ENPs) were collected at 4 of the 16 experimental sites — Corpus Christi, TX; Fort Wayne, IN; Jersey City, NJ; and Providence, RI. We focus (almost) exclusively on these four SDAs in our empirical analyses.
3.3 Data from the NJS

The data we use come from surveys administered to the ENPs and to controls at the same four sites. These surveys include a long baseline survey, administered shortly after random assignment (RA) for the controls and shortly after measured eligibility (EL) for the ENPs, and follow-up surveys (one or two for the controls and one for the ENPs). Heckman and Smith (1999, 2004) describe the data sets and the construction of these variables in greater detail. The data on earnings and employment outcomes come from the follow-up surveys. In particular, for the bias estimates we use the same quarterly self-reported earnings variables used in Heckman et al. (1997) and Heckman et al. (1998a). The variables measure earnings (or employment, defined as non-zero earnings) in quarters relative to the month of RA for the controls and of EL for the ENPs. In our work, we aggregate the six quarters after RA/EL into a single dependent variable. Appendix B of Heckman et al. (1998a) describes the construction of these variables, and the resulting analysis sample, in detail. We focus on these variables in order to make our results comparable to those in earlier studies. Our experimental impact estimates use the same earnings variables as in the official impact reports by Bloom et al. (1993) and Orr et al. (1994). These variables differ in a variety of ways; see those reports as well as Heckman and Smith (2000) for more details.

Both the JTPA program and the NJS divide the population into four groups based on age and sex: adult males and females aged 22 and older and male and female youth aged 16–21. We focus solely on the two adult groups in this study as they provide the largest samples. Table 1 displays the sample sizes for our analyses, divided into ENPs, all controls and controls in each of the three treatment streams. Two main points
Table 1 Sample sizes used in estimation

                                                ENP(a)   All CTRLs   CT-OS    OJT   Other
Adult males
  Propensity score sample(b)                       818        734       75    374     285
  Observations with non-missing earnings(c)        391        499       57    277     165
    After imposing min–max common support            –        391       49    207     102
  Observations with non-missing employment(c)      412        502       57    279     166
    After imposing min–max common support            –        394       49    209     103
Adult females
  Propensity score sample(b)                     1,569        869      265    341     263
  Observations with non-missing earnings(c)        870        660      207    271     182
    After imposing min–max common support            –        640      200    255     178
  Observations with non-missing employment(c)      896        665      208    274     183
    After imposing min–max common support            –        645      201    258     179

(a) No ENP observations are lost due to imposing a common support restriction
(b) The propensity score sample consists of all individuals aged 22 to 54 who completed the long baseline survey and have valid values of the age and sex variables. This is the same sample employed in Heckman and Smith (1999). The sub-samples of the propensity score samples with non-missing values of employment and earnings in the six quarters before and after RA/EL are used in estimating the biases
(c) The sample size for cell matching on the labor force status transitions is slightly smaller than shown here as we cannot use observations with (fractional) imputed values for the transitions. The sample sizes for some of the other estimators are slightly smaller than shown here because the cross-validation sometimes chooses a particular kernel that implicitly imposes a stronger common support restriction
emerge from Table 1: first, our sample sizes, though respectable in comparison to the widely used data from the National Supported Work Demonstration, remain small given that we apply semi-parametric estimation methods. Second, treatment stream assignment does not happen at random. In our data, streams related to services that imply immediate job placement, namely the OJT and “other” streams, have relatively many men, while CT-OS has relatively many women. See Kemple et al. (1993) for a detailed descriptive analysis of assignment to treatment stream in the NJS as a whole. Table 2 indicates the fraction of the experimental treatment group receiving each JTPA service type at the four sites; note that individuals may receive multiple services. Quite similar patterns appear for the full NJS treatment group. These data indicate the extent to which the treatment streams correspond to particular services and aid in the interpretation of the experimental impact estimates presented subsequently. The table highlights two main patterns. First, treatment stream assignment predicts receipt of the corresponding service. For example, among adult women in the CT-OS treatment stream, 58.5% receive some CT-OS, compared to 2.3% in the OJT stream and 10.6% in the “other” stream. Second, as analyzed in detail in Heckman et al. (1998c), many treatment group members, especially those in the OJT and “other” treatment streams, never enroll in JTPA and receive a service. Some treatment group members
Table 2 Treatment streams and service receipt – percentage of treatment group members receiving each service type: four ENP sites

                         Experimental treatment stream
Actual services received   Overall    CT-OS     OJT     Other
Adult males
  None                       44.33    33.94    48.44    41.95
  CT-OS                       8.27    57.80     0.94     2.97
  OJT                        14.90     0.92    24.17     6.64
  JSA                        27.52    21.10    33.85    20.90
  ABE                         2.86     6.88     0.10     5.37
  Others                     12.25     3.67     0.42    30.93
Adult females
  None                       46.02    30.16    52.94    52.91
  CT-OS                      22.02    58.52     2.27    10.55
  OJT                         9.05     0.16    19.79     5.05
  JSA                        26.94    26.89    32.09    21.10
  ABE                         4.92    10.49     0.53     4.74
  Others                      5.91     3.11     0.53    14.68

Notes: 1. The experimental treatment streams are defined as follows, based on service recommendations prior to random assignment: the CT-OS stream includes persons recommended to receive CT-OS, possibly along with services other than OJT, prior to random assignment; the OJT stream includes persons recommended to receive OJT, possibly along with services other than CT-OS, prior to random assignment; the Other services stream includes everyone else. 2. The proportions for actual services received do not have to sum to one because individuals can receive multiple services. These services are: None is for individuals who do not receive any treatment (drop-outs); CT-OS is classroom training in occupational skills; OJT is on-the-job training; JSA is job search assistance; ABE is adult basic education; Other is a mix of other services.
received limited services but did not enroll (for reasons related to the gaming of the JTPA performance management system); Table 2 includes only enrollees. Only a handful of control group members overcame the experimental protocol and received JTPA services in the 18 months after random assignment. At the same time, many control group members, particularly in the CT-OS treatment stream, did receive substitute services from other sources. On average, these services started later than those received by treatment group members and included fewer hours. Exhibits 5.1 and 5.2 of Orr et al. (1994) document the extent of control group substitution for the full NJS; the fraction receiving services in the treatment group exceeds that in the control group by 15–30 percentage points depending on the demographic group and treatment stream. These exhibits combine administrative data on service receipt for the treatment group with self-reports for the control group. Smith and Whalley (2006)
compare the two data sources. See also Heckman et al. (2000), who re-analyze the CT-OS treatment stream data to produce estimates of the impact of training versus no training.

Table 3 presents descriptive statistics on the variables used in the propensity score estimation. Table A1 provides variable definitions. One important variable, namely the labor force status transitions, requires some explanation. A labor force status consists of one of “employed”, “unemployed” (not employed but looking for work) and “out of the labor force” (OLF — not employed and not looking for work). Each transition consists of a pair of statuses. The second is always the status in the month of RA/EL. The first is the most recent prior status in the 6 months before RA/EL. Thus, for example, the transition “emp → unm” indicates someone who ended a spell of employment in the 6 months prior to RA/EL to start a spell of unemployment that continued through the month of RA/EL. Transitions with the same status on both sides, such as “unm → unm”, correspond to individuals who maintain the same status for all 7 months up to and including the month of RA/EL.

The descriptive statistics reveal a number of interesting patterns. Dropouts (those in the first two education categories) differentially sort into OJT among men but into CT-OS and other among women. Overall, the controls have more schooling than the ENPs. Among adult women, long-term welfare recipients (those in the last welfare transition category) differentially sort into CT-OS, while those not recently on welfare (and in the first transition category) differentially sort into OJT and other. In both groups, individuals unemployed at RA/EL, especially those recently employed or persistently unemployed, differentially sort into the control group; within this group, among men the recent job losers differentially sort into the OJT stream.

4 Econometric methods

4.1 Notation and parameters of interest

This section defines our notation and describes the parameters of interest for the empirical portion of our study. We proceed in the context of the potential outcomes framework variously attributed to Neyman (1923), Fisher (1935), Roy (1951), Quandt (1972) and Rubin (1974). Imbens (2000) and Lechner (2001) extend this framework to multi-treatment programs. Within this framework, we can think about outcomes realized in counterfactual states of the world in which individuals experience treatments they did not receive in real life. We denote individuals by “i” and treatments by “j”, with $Y_{ij}$ signifying the potential outcome for individual “i” in treatment “j”. In many multi-treatment program contexts (including ours), it makes sense to single out one treatment as the “no treatment” baseline, which we assign the value $j = 0$. Let $D_{ij} \in \{0, 1\}$ be treatment indicators for each of the $j = 0, \ldots, J$ treatments, where $D_{ij} = 1$ if individual “i” receives treatment “j” and $D_{ij} = 0$ otherwise, where of necessity $\sum_{j=0}^{J} D_{ij} = 1$ for all “i”. The observed outcome then becomes $Y_i = \sum_{j=0}^{J} D_{ij} Y_{ij}$.
Table 3 Descriptive statistics

                                     Adult males                          Adult females
                              ENP    CT-OS     OJT    Other       ENP    CT-OS     OJT    Other
Mean age                    34.26    29.63   31.99    32.06     33.65    30.26   31.88    32.84
Education
  <10 years                 31.76    13.70   25.14    15.44     33.76    23.85   19.70    23.95
  10–11 years               17.59    21.92   21.55    27.21     18.91    18.85   19.70    21.67
  12 years                  29.66    38.36   33.70    36.03     33.56    40.38   47.27    34.60
  13–15 years               13.39    21.92   15.47    17.65     10.87    15.38   11.52    15.97
  >15 years                  7.61     4.11    4.14     3.68      2.90     1.54    1.82     3.80
Race
  White                     38.38    17.33   64.71    39.30     37.99    20.75   58.36    36.50
  Black                     11.74    36.00   19.79    41.75     19.28    35.09   23.17    39.16
  Hispanic                  44.19    38.67   14.17    14.04     38.12    41.89   15.25    22.81
  Other                      5.69     8.00    1.34     4.91      4.61     2.26    3.23     1.52
Marital status
  Single                    26.17    65.28   43.02    56.55     33.50    56.25   30.89    43.95
  Living with spouse        68.60    20.83   36.47    28.84     51.98    19.58   28.03    22.87
  Div./wid./separated        5.23    13.89   20.51    14.61     14.52    24.17   41.08    33.18
Family income last year
  0–$3,000                  16.59    31.71   28.81    42.01     46.48    60.10   38.06    50.52
  $3,000–$9,000             17.26    34.15   28.81    23.08     20.02    20.69   34.41    26.80
  $9,000–$15,000            21.68    14.63   20.16    19.53     14.45    10.84   14.17     9.28
  >$15,000                  44.47    19.51   22.22    15.38     19.04     8.37   13.36    13.40
Welfare transition patterns
  No welf. → no welf.       60.17    72.00   75.67    76.14     44.50    33.21   47.80    40.30
  No welf. → welfare         1.45    12.00    8.56     7.02      1.64     7.92   13.20    11.41
  Welfare → no welf.         1.09     4.00    1.07     2.46      1.71     1.89    3.52     1.90
  Welfare → welfare         13.80     9.33   13.64    11.23     36.98    56.60   34.60    44.87
  Indicator for missing
  welfare info.             23.49     2.67    1.07     3.16     15.17     0.38    0.88     1.52
Labor force transition patterns
  emp → emp                 70.22    14.29   20.99    18.83     36.58    15.73   19.46    13.62
  unm → emp                  6.99    11.11   13.27     8.79      4.09     2.02    8.72     9.36
  olf → emp                  2.50     4.76    4.94     5.02      4.50     1.61    5.37     4.68
  emp → unm                  5.16    28.57   28.40    27.20      3.76    12.50   24.16    18.72
  unm → unm                  4.99    23.81   17.90    13.81      4.09    16.94   12.42    14.04
  olf → unm                  1.33     7.94    3.09     7.95      3.85     8.47    9.40    10.64
  emp → olf                  1.50     3.17    6.79     5.02      5.65     7.66    7.72     4.26
  unm → olf                  0.33     1.59    2.16     1.67      2.37     5.24    2.68     4.26
  olf → olf                  6.99     4.76    2.47    11.72     35.11    29.84   10.07    20.43
Sum of earnings
  6 pre-RA/EL quarters    16838.7   9607.7 10401.6   9857.0    6096.8   4276.1  6795.8   5308.4
  6 post-RA/EL quarters   18902.1   9975.9 13196.9  12037.4    7112.7   5750.3  9131.7   7213.2
Employed
  In quarter 6 before RA/EL  0.759    0.564   0.680    0.680     0.454    0.416   0.534    0.488
  In quarter 6 after RA/EL   0.736    0.667   0.703    0.675     0.498    0.486   0.672    0.552
Note: The descriptive statistics apply to the sample used to estimate the propensity scores
In our data j = 1 denotes the CT-OS treatment stream, j = 2 denotes the OJT treatment stream and j = 3 denotes the other treatment stream. To reduce notational burden we omit the “j” subscript when it is not needed. Within treatment stream “j”, individuals randomly assigned to the experimental treatment state experience Yij and those randomly assigned to the control group (along with the ENPs) experience Yi0 . These states embody both failure to enroll in JTPA in the first case and possible service receipt from other programs (by both the controls and the ENPs) in the second case. The most common parameter of interest in the literature consists of the average impact of treatment “j” on the treated, given by ATETj = E(Yj |Dj = 1) − E(Y0 |Dj = 1). This parameter indicates the mean effect of receiving treatment “j” relative to receiving no treatment for those individuals who receive treatment “j”. The average treatment effect on the treated for the multi-treatment program as a whole consists of a weighted (by the fraction in each treatment) average of the ATETj . We have the rich data on conditioning variables required to justify the matching methods we use only for the experimental controls and the ENPs. As a result, rather than estimating average treatment effects, we follow Heckman et al. (1997, 1998a) in estimating the bias associated with applying matching based on covariates X to these data to estimate ATETj , j ∈ {1, 2, 3}. For treatment stream “j” this bias equals BIASj =
$\int \big[ E(Y_{0i} \mid X_i, D_{ij} = 1) - E(Y_{0i} \mid X_i, D_{i0} = 1) \big]\, \mathrm{d}F(X \mid D_{ij} = 1),$
where the first term inside the square brackets corresponds to the experimental control group for treatment stream “j” and the second term corresponds to the ENPs. Integrating with respect to the distribution of observables for the control group reflects our interest in the bias associated with estimating the ATET. If BIASj = 0 then matching using conditioning variables X solves the selection problem in this context for treatment stream “j”. In essence, we view each treatment stream as a separate program and estimate the bias associated with using matching to estimate the ATET for that treatment stream using the ENPs as a comparison group. The literature defines a variety of other parameters of interest. The unconditional average treatment effect, defined as ATEj = E(Yj ) − E(Y0 ), provides useful information when considering assigning all of some population to a particular treatment. In a multi-treatment program context, Imbens (2000) and Lechner (2001) define a variety of other parameters, such as the mean impact of receiving treatment “j” relative to treatment “k” for those who receive treatment “j” and the mean impact of treatment “j” on those who receive either treatment “j” or treatment “k”. Due to the nature of our data we do not examine
these additional parameters, nor do we use the more complicated apparatus of multi-treatment matching developed by Imbens (2000) and Lechner (2001). Moreover, all of the parameters defined in this section represent partial equilibrium parameters, in the sense that they treat the potential outcomes as fixed when changing treatment assignment. The statistics literature calls this the Stable Unit Treatment Value Assumption (SUTVA). Heckman et al. (1998b), Lise et al. (2005) and Plesca (2006) discuss program evaluation in a general equilibrium context.
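To fix ideas, the sketch below (in Python, with purely illustrative numbers rather than estimates from the paper; the function names and values are our own assumptions) shows the two bookkeeping steps implied by the definitions above: a stream-specific bias as the mean gap between stream-j controls and their matched ENP counterfactuals, and the aggregation of stream-specific quantities into a single-program figure as a weighted average, which is where offsetting positive and negative components can cancel.

```python
import numpy as np

def stream_bias(y0_controls, y0_enp_matched):
    """BIAS_j estimate: mean untreated outcome among stream-j experimental controls
    minus the mean of their matched (or reweighted) ENP counterfactual outcomes."""
    return float(np.mean(y0_controls) - np.mean(y0_enp_matched))

def aggregate_over_streams(stream_values, stream_counts):
    """Aggregate stream-specific quantities (biases or ATET_j estimates) into a
    single-program figure: a weighted average, weighted by each stream's share
    of treated (here, control) observations."""
    w = np.asarray(stream_counts, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(stream_values, dtype=float))

# Hypothetical stream-specific biases (not estimates from the paper), combined with
# the adult-male control counts by stream from Table 1 (CT-OS, OJT, Other).
biases = [1200.0, -900.0, 150.0]
counts = [75, 374, 285]
print(aggregate_over_streams(biases, counts))  # offsetting signs shrink the aggregate figure
```

The same weighting logic applies when forming the average treatment effect on the treated for the program as a whole from the stream-specific ATETj.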
4.2 Identification

Our empirical analysis follows the literature that treats JTPA as a single treatment by using the experimental data as a benchmark against which to judge the performance of semi-parametric matching estimators. We use matching for four reasons. First, it performs reasonably well in the existing literature at evaluating the aggregated JTPA treatment. Second, we have very rich data on factors related to participation and outcomes, including monthly information on labor force status in the period prior to the participation decision. The existing literature, in particular Card and Sullivan (1988), Heckman and Smith (1999) and Dolton et al. (2006), emphasizes both the importance of conditioning on past labor market outcomes and doing so in flexible ways. Third, relative to least squares regression, matching only compares the comparable when constructing the estimated, expected counterfactual, allows for more flexible conditioning on the observables and allows an easier examination of the support condition. Fourth, while this does not make matching any more plausible, we lack the exclusion restrictions required to use IV or the bivariate normal selection model of Heckman (1979). Furthermore, Heckman and Smith (1999) find, for reasons discussed below, that longitudinal estimators fare poorly in this context.

Matching estimators of all sorts rely on the assumption of selection on observables; that is, they assume independence between treatment status and untreated outcomes conditional on some set of observable characteristics. In the matching literature, this gets formalized as the conditional independence assumption (CIA), $Y_0 \perp D \mid X$, where “⊥” denotes independence. The statistics literature calls this assumption unconfoundedness. As noted in Heckman et al. (1997, 1998a) our problem actually requires only mean independence, rather than full independence. We invoke the CIA separately for each of the three treatment streams.

Rosenbaum and Rubin (1983) show that if you can match on some set of conditioning variables X, then you can also match on the probability of participation given X, or the propensity score, given by P(X) = Pr(D = 1|X). Their finding allows the restatement of the CIA in terms of P(X). Matching (or weighting) on estimated propensity scores from a flexible parametric propensity score model reduces the non-parametric dimensionality of the problem from the number of conditioning variables to one, thus substantially increasing the rate of convergence. Use of a flexible parametric propensity score model
seems to perform as well in practice as either reducing the dimensionality of X via alternative means such as the Mahalanobis metric or estimating propensity scores semi-parametrically. See Zhao (2004) for further discussion of alternative dimension reduction schemes and Kordas and Lehrer (2004) for a discussion of semi-parametric propensity scores. In order for the CIA to have empirical content, the data must include untreated observations for each value of X observed for a treated observation. In formal terms, in order to estimate the mean impact of treatment on the treated, we require the following common support condition: P(X) < 1 for all X. This condition can hold in the population, or in both the population and the sample, though the literature often neglects this distinction. We assume it holds in the population and then impose it in the sample. As discussed in e.g. Smith and Todd (2005a), a number of methods exist to impose this condition. We adopt the simple min-max rule employed in Dehejia and Wahba (1999, 2002); under this rule, observations below the maximum of the two minimums of the estimated propensity scores in the treated and untreated samples, and above the minimum of the maximums, lie outside the empirical common support and get omitted from the analysis. We adopt this rule rather than the more elegant trimming rule employed in Heckman et al. (1997, 1998a) for simplicity given that our sensitivity analysis reveals no substantive effect of this choice (or, indeed, of simply ignoring the issue) on the results. Given our focus on pairwise comparisons between treatment types and no treatment, we apply the support condition separately for each pairwise comparison.
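As a concrete illustration of the min–max rule just described, the short Python sketch below drops observations outside the overlap of the two estimated propensity score ranges. It is our own minimal rendering of the rule rather than the authors' code, and the function and argument names are assumptions.

```python
import numpy as np

def minmax_common_support(p_treated, p_comparison):
    """Min-max common support in the spirit of Dehejia and Wahba (1999, 2002):
    keep observations whose estimated propensity score lies between the larger of
    the two group minima and the smaller of the two group maxima."""
    lo = max(p_treated.min(), p_comparison.min())
    hi = min(p_treated.max(), p_comparison.max())
    keep_treated = (p_treated >= lo) & (p_treated <= hi)
    keep_comparison = (p_comparison >= lo) & (p_comparison <= hi)
    return keep_treated, keep_comparison, (lo, hi)
```

In this setting the rule would be applied once per pairwise comparison, pairing each treatment stream's experimental controls with the ENP comparison group.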
4.3 Estimation

We estimate our propensity scores using a standard logit model. The only twist concerns adjustment for the choice-based sampling that generated our data. Our data strongly over-represent participants relative to their fraction in the population of JTPA eligibles. We follow Heckman and Smith (1999) in dealing with this issue by reweighting the logit back to population proportions under the assumption that controls represent three percent of the eligible population; see their footnote 19 for more on this. We further assume that each treatment stream represents one percent of the eligible population.

Smith and Todd (2005b) show that the literature offers a variety of alternative balancing tests. These tests aid the researcher in selecting an appropriately flexible parametric propensity score model for a given set of conditioning variables X by examining the extent to which a given specification satisfies the property that E(D|X, P(X)) = E(D|P(X)). In words, conditional on P(X), the X should have the same distribution in the treated and comparison groups. In this sense, matching mimics a randomized experiment by balancing the distribution of covariates in the treatment group and the matched (or reweighted) comparison group. Balancing tests do not provide any information about the validity of the CIA. For simplicity and comparability with most of the existing literature, we focus here on the “standardized differences” described in
Rosenbaum and Rubin (1985). For each variable in X, the difference equals the mean in the treatment group minus the mean in the matched (or reweighted) comparison group divided by the square root of the sum of the variances in the treated and unmatched comparison groups. Rosenbaum and Rubin (1985) suggest concern in regard to values greater than 20. As one of the results from the existing literature that we want to revisit in the disaggregated context concerns a general lack of sensitivity to the particular matching estimator selected, we report estimates from a number of different matching estimators here, along with OLS and two cell matching estimators. All matching estimators have the general form
$$M = \frac{1}{n_1} \sum_{i \in \{D_i = 1\}} \left[ Y_{1i} - \sum_{j \in \{D_j = 0\}} w(i,j)\, Y_{0j} \right],$$
where n1 denotes the number of D = 1 observations. They differ only in the details of the construction of the weight function w(i, j). As described in e.g. Angrist and Krueger (1999), OLS also implicitly embodies a set of weights that, depending on the distributions of X among the participants and non-participants, can differ substantially from those implied by most matching estimators. We can also think about matching as using the predicted values from a nonparametric regression of Y0 on P(X) estimated using the comparison group sample as the estimated, expected counterfactual outcomes for the treated units. This way of thinking about matching makes it clear both that matching differs less from standard methods than it might first appear and that all our knowledge about various non-parametric regression methods, such as that in Pagan and Ullah (1999), applies in this context as well. Each matching method we consider, with the exception of the longitudinal ones, corresponds to using a different estimator for the non-parametric regression of Y0 on P(X). We consider two simple cell matching estimators. The first matches observations solely on the value of their labor force status transition variable. The second estimator stratifies based on deciles of the estimated propensity score, where the deciles correspond to the pooled sample. The applied statistics literature often uses this approach, though that literature often follows Rosenbaum and Rubin (1984) in using only five propensity score strata. As we are cautious economists rather than bold statisticians, we use 10 in our analysis. In nearest neighbor matching w(i, j) = 1 for the comparison observation that has the propensity score closest to that of treated observation “i” and zero otherwise. We implement nearest neighbor matching with replacement, so that a given comparison observation can get matched to more than one treated observation, because our data suffer from a lack of comparison group observations similar to the treated observations. Kernel matching assigns positive weight to comparison observations with propensity scores similar to that of each treated observation, where the weights decrease with the propensity score distance. Formally,
$$w(i,j) = \frac{G\!\left( \dfrac{P_i(X) - P_j(X)}{a_n} \right)}{\sum_{k \in \{D_k = 0\}} G\!\left( \dfrac{P_i(X) - P_k(X)}{a_n} \right)},$$

where G denotes a kernel function and $a_n$ denotes an appropriately chosen bandwidth. We consider three commonly used kernels: the Gaussian (the standard normal density function), the Epanechnikov and the tricube. Local linear matching uses the predicted values from a local linear regression (a regression weighted by the kernel weights just defined) as the estimated expected counterfactual. Fan and Gijbels (1996) discuss the relative merits of kernel regression versus local linear regression; for our purposes, the fact that local linear regression has better properties near boundary values suggests applying it here, given our many observations with propensity scores near zero. Though not required for consistency, ex post regression adjustment following matching — essentially running a regression using the weights from the matching — can reduce bias in finite samples and also reduce the variance of the resulting estimate. The formal literature calls this bias-corrected matching. See Ho et al. (2007) for informal discussion, references and applications. Note that this procedure differs from the “regression-adjusted” matching in Heckman et al. (1997, 1998a) because here the matching step comes first.

Finally, in addition to cross-sectional matching estimators, we consider two variants of the difference-in-differences matching developed in Heckman et al. (1997, 1998a). This method differs from standard differences-in-differences because it uses matching rather than linear regression to condition on X. We simply replace the post-RA/EL outcome measure with the pre–post difference to implement the estimator.

Each class of matching estimators (other than cell matching) implies a bandwidth choice. Choosing a wide bandwidth (or many neighbors in the nearest neighbor matching) reduces the variance of the estimates because more observations, and thus more information, go into the predicted expected counterfactual for each observation. At the same time, a wider bandwidth means more bias, as observations less like the treated observation under consideration get used in constructing the counterfactual. In our analysis, we allow the data to resolve the matter by relying on leave-one-out cross validation as described in e.g. Racine and Li (2005) and implemented in Black and Smith (2004) to choose bandwidths that minimize the estimated mean squared error of the estimates. Fitzenberger and Speckesser (2005) and Galdo et al. (2006) consider alternative bandwidth selection schemes. In the kernel matching, we also rely on the cross-validation to choose among the Gaussian, Epanechnikov and tricube kernel functions. As the second and third of these do not imply positive weights on the whole real line, they may implicitly strengthen the support condition we impose. As we use the same ENP comparison group when analyzing each treatment stream, we need only one bandwidth for each estimator for each demographic group. Table A2 documents the bandwidth choice exercise.
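The sketch below illustrates the kind of kernel matching and bandwidth selection just described, under our own simplifying assumptions: an Epanechnikov kernel, a user-supplied bandwidth grid, and leave-one-out predictions within the comparison group as the cross-validation criterion. It is not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel; weight is zero outside |u| <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_counterfactual(p_eval, p_comp, y_comp, bandwidth, kernel=epanechnikov):
    """Kernel-regression prediction of the untreated outcome at each propensity score
    in p_eval, using comparison-group scores p_comp and outcomes y_comp."""
    u = (p_eval[:, None] - p_comp[None, :]) / bandwidth
    w = kernel(u)
    denom = w.sum(axis=1)
    pred = np.full(p_eval.shape, np.nan)
    ok = denom > 0                  # no prediction where no comparison mass falls in the window
    pred[ok] = (w[ok] @ y_comp) / denom[ok]
    return pred

def loo_cv_bandwidth(p_comp, y_comp, grid, kernel=epanechnikov):
    """Leave-one-out cross-validation: for each candidate bandwidth, predict every
    comparison outcome from the remaining comparison observations and keep the
    bandwidth with the smallest mean squared prediction error."""
    best_bw, best_mse = None, np.inf
    for bw in grid:
        w = kernel((p_comp[:, None] - p_comp[None, :]) / bw)
        np.fill_diagonal(w, 0.0)    # leave each observation out of its own prediction
        denom = w.sum(axis=1)
        ok = denom > 0
        pred = (w[ok] @ y_comp) / denom[ok]
        mse = np.mean((y_comp[ok] - pred) ** 2)
        if mse < best_mse:
            best_bw, best_mse = bw, mse
    return best_bw

def kernel_matching_bias(p_treat, y0_treat, p_comp, y0_comp, bandwidth):
    """Cross-sectional kernel-matching bias estimate: mean, over stream-j controls,
    of the observed untreated outcome minus the kernel-weighted ENP counterfactual."""
    m0 = kernel_counterfactual(p_treat, p_comp, y0_comp, bandwidth)
    keep = ~np.isnan(m0)
    return float(np.mean(y0_treat[keep] - m0[keep]))
```

The difference-in-differences variants follow by passing the pre–post outcome difference in place of the post-RA/EL outcome, leaving the matching step itself unchanged.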
Heckman and Todd (1995) consider the application of matching methods in choice-based samples (such as ours). Building on the robustness of logit model coefficient estimates (other than the intercept) to choice-based sampling, they show that matching works in choice-based samples when applied using the odds ratio or the log odds ratio from an unweighted logit participation model. Theory provides no guidance on whether to use the odds ratio or the log odds ratio; as we have many estimated scores near zero, we use the odds ratio to better distinguish these values. In any event, a sensitivity analysis revealed little effect of this decision on the estimates.
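A minimal sketch of the odds-ratio device follows, assuming a design matrix X that already contains a constant and using a simple Newton–Raphson logit of our own rather than any particular software routine; matching would then proceed on the returned odds ratios instead of the propensity scores themselves.

```python
import numpy as np

def fit_logit(X, d, n_iter=25):
    """Unweighted logit fit by Newton-Raphson; X should already include a constant column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        gradient = X.T @ (d - p)
        hessian = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(hessian, gradient)
    return beta

def odds_ratio_scores(X, d):
    """Because choice-based sampling distorts only the logit intercept, matching can
    proceed on the odds ratio P/(1-P) from an unweighted logit (Heckman and Todd 1995)."""
    beta = fit_logit(X, d)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return p / (1.0 - p)
```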
5 Empirical analysis of the NJS

5.1 Experimental estimates

We begin our empirical analysis by looking for the possibility of cancellation when combining impacts from the three treatment streams in the NJS data. Given that the different services (and thus the different treatment streams) involve quite different inputs in terms of time and other resources — see e.g. the cost estimates in Exhibits 6.4 and 6.5 of Orr et al. (1994) and Heinrich et al. (1999) — and given the use of different providers for the various services within JTPA, we have good reasons to expect differences in mean impacts by treatment stream.

Table 4 reports experimental impact estimates over 18 and 30 months after random assignment, respectively, for both adult males and adult females. The impacts at 18 months in Table 4 are based solely on self-reported earnings from the first follow-up survey, with outliers recoded by hand by Abt Associates — the same outcome variable as in Bloom et al. (1993). The impacts at 30 months in Table 4 rely on the earnings variables from Orr et al. (1994), which combine self-reported data from both follow-up surveys with administrative data from state UI records for non-respondents in a rather unattractive way (see their Appendix A for the sordid details). We define employment as non-zero earnings in the sixth or tenth quarters after random assignment. All estimates consist of simple mean differences. Heckman and Smith (2000) analyze the sensitivity of the NJS experimental impact estimates.

Table 4 reveals four important patterns. First, the impact estimates have non-trivial standard errors; conditioning on observables would not change this very much. Not surprisingly, we typically find smaller standard errors for all 16 sites than for the four ENP sites. Second, the point estimates vary a lot by treatment stream. Although not close to statistical significance at 18 months, at 30 months the employment estimates for adult females do statistically differ by treatment stream. Moreover, for both the four and 16 site estimates, three of the four comparisons have p values below 0.20. This suggests the potential for substantively meaningful cancellation when, for example, combining the strong employment impacts in quarter 10 for adult women in the CT-OS stream with the zero estimated impact for those in the OJT stream.
Table 4 Adult males and females – experimental impacts by treatment stream

                        Overall       CT-OS        OJT         Other       Test of equality across streams(b)

Experimental impacts at 18 months
Impacts at four sites with ENPs(a)
 Outcome: sum of earnings over 18 months after random assignment
  Adult males            427.66       36.82     1264.86     −1228.18      Chi2(2) = 2.71, p-value = 0.26
                        (651.61)   (1513.09)    (841.77)    (1285.94)
  Adult females          473.99      335.21      847.85        65.73      Chi2(2) = 0.56, p-value = 0.75
                        (424.48)    (642.33)    (652.45)     (937.71)
 Outcome: employment in quarter 6 after random assignment
  Adult males             0.011      −0.032       0.040       −0.033      Chi2(2) = 2.06, p-value = 0.36
                         (0.025)     (0.072)     (0.032)      (0.047)
  Adult females           0.025       0.038       0.016        0.027      Chi2(2) = 0.20, p-value = 0.91
                         (0.022)     (0.041)     (0.031)      (0.044)
Impacts at all 16 experimental sites
 Outcome: sum of earnings over 18 months after random assignment
  Adult males            572.89      397.79      831.08       297.95      Chi2(2) = 0.41, p-value = 0.82
                        (381.04)    (745.08)    (525.26)     (809.16)
  Adult females          765.48      700.93      735.40      1047.86      Chi2(2) = 0.30, p-value = 0.86
                        (230.54)    (318.94)    (392.40)     (561.26)
 Outcome: employment in quarter 6 after random assignment
  Adult males             0.016       0.005       0.021        0.015      Chi2(2) = 0.18, p-value = 0.92
                         (0.015)     (0.030)     (0.020)      (0.030)
  Adult females           0.030       0.034       0.010        0.059      Chi2(2) = 2.06, p-value = 0.36
                         (0.013)     (0.020)     (0.020)      (0.028)

Experimental impacts at 30 months
Impacts at four sites with ENPs(a)
 Outcome: sum of earnings over 30 months after random assignment
  Adult males            942.01    −1272.44     1964.79      −495.47      Chi2(2) = 2.20, p-value = 0.33
                        (897.57)   (3008.57)   (1256.27)    (1376.23)
  Adult females         1565.85      756.99      883.45      3288.96      Chi2(2) = 3.51, p-value = 0.17
                        (627.93)   (1082.29)    (988.27)    (1094.15)
 Outcome: employment in quarter 10 after random assignment
  Adult males            −0.030      −0.090       0.006       −0.086      Chi2(2) = 4.25, p-value = 0.12
                         (0.023)     (0.080)     (0.030)      (0.037)
  Adult females           0.054       0.087       0.000        0.108      Chi2(2) = 5.20, p-value = 0.07
                         (0.022)     (0.045)     (0.033)      (0.039)
Impacts at all 16 experimental sites
 Outcome: sum of earnings over 30 months after random assignment
  Adult males           1213.22     1266.72     1675.36       388.44      Chi2(2) = 0.91, p-value = 0.63
                        (580.94)   (1245.81)    (829.46)    (1065.74)
  Adult females         1248.79      912.66      749.88      2638.24      Chi2(2) = 4.32, p-value = 0.12
                        (369.52)    (548.18)    (633.38)     (761.96)
 Outcome: employment in quarter 10 after random assignment
  Adult males             0.007      −0.005       0.033       −0.036      Chi2(2) = 4.05, p-value = 0.13
                         (0.015)     (0.033)     (0.021)      (0.027)
  Adult females           0.037       0.037       0.013        0.078      Chi2(2) = 3.23, p-value = 0.20
                         (0.014)     (0.022)     (0.022)      (0.028)

(a) Robust standard errors are in parentheses
(b) The null hypothesis is equal impacts in the three treatment streams
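The experimental estimates in Table 4 are simple mean differences; the sketch below shows, under our own assumptions (independent samples across streams and a Wald test built from pairwise contrasts against the third stream), how such impacts and tests of equal impacts across the three streams can be computed. It is an illustration rather than the procedure actually used to produce the table.

```python
import numpy as np
from scipy import stats

def stream_impact(y_treatment, y_control):
    """Experimental impact as a simple treatment-control mean difference, with an
    unequal-variance (robust) sampling variance."""
    diff = y_treatment.mean() - y_control.mean()
    var = y_treatment.var(ddof=1) / len(y_treatment) + y_control.var(ddof=1) / len(y_control)
    return diff, var

def test_equal_impacts(impacts, variances):
    """Wald test of equal impacts across three streams, treating the stream estimates
    as independent (different individuals populate each stream). Contrasts are taken
    against the third stream; the statistic is chi-squared with 2 degrees of freedom."""
    d = np.array([impacts[0] - impacts[2], impacts[1] - impacts[2]])
    V = np.array([[variances[0] + variances[2], variances[2]],
                  [variances[2], variances[1] + variances[2]]])
    wald = float(d @ np.linalg.solve(V, d))
    return wald, float(stats.chi2.sf(wald, df=2))
```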
5.2 Determinants of participation by treatment stream

All of the services offered by JTPA aim to improve the labor market prospects of participants. At the same time, the channels through which they operate, and the economics of the participation decision related to each service, differ substantially. For example, CT-OS represents a serious investment in human capital that aims to prepare the participant for a semi-skilled occupation and thereby increase their wage. It has a higher opportunity cost than the other services because participants typically do not work while receiving training and because, unlike many European programs, participants do not receive any stipend (though they remain eligible for other transfers). OJT immediately places the participant in employment. Participants in OJT get a chance with employers who might reject them without the subsidy (which gives employers an incentive to take some risks in hiring, keeping in mind the low dismissal costs in the U.S.) as well as human capital acquired on the job. This service has low opportunity costs but has the feature that not only must the caseworker agree to provide the subsidy but a firm must also agree to hire the subsidized worker. Finally, the Job Search Assistance (JSA) received by many in the “other” services stream aims to reduce the time required to find a job, but does not aim to increase wages via increases in human capital.

Because of these differences in the economics among the services offered by JTPA, we expect the nature of the selection process to differ by treatment stream. These differences may affect the timing and magnitude of the “Ashenfelter (1978) dip.” As discussed in Heckman and Smith (1999) and documented for a variety of programs in Heckman et al. (1999), the dip refers to the fall in mean earnings and employment typically observed among participants just prior to participation. These differences may also affect what variables matter, and how strongly they matter, in predicting participation conditional on eligibility. For example, we expect to see job-ready participants, as indicated by past labor force attachment and schooling, receiving OJT, and to see individuals with less human capital and with sources of income from social programs, such as single mothers on AFDC, sort into CT-OS.

We begin by looking at Ashenfelter's dip. Figures 1 and 2 present the time series of mean earnings. Figure 1 shows that (somewhat surprisingly) for adult men all three treatment streams display roughly the same pattern as the full control group in terms of both levels and dip, though with a slightly muted dip for the “other” treatment stream. Figure 2 for adult women shows similar dips across treatment streams, this time slightly magnified for the “other” treatment stream, but different initial levels across groups. Consistent with the earlier discussion, those who enter the CT-OS stream have the lowest earnings levels and those entering the OJT stream have the highest levels, which is suggestive of greater job readiness. The lack of strong differences in the pre-program dip among the treatment streams surprised us. For adult women, we also observe post-random assignment earnings growth relative to the ENPs for all three treatment streams. Heckman and Smith (1999) document that the dip, along with the post-random assignment earnings growth observed for adult female
Fig. 1 Adult males – pre-RA/EL and post-RA/EL monthly earnings averaged by quarter (series shown: ENP, CT-OS, OJT, Other and all controls)
Fig. 2 Adult females – pre-RA/EL and post-RA/EL monthly earnings averaged by quarter (series shown: ENP, CT-OS, OJT, Other and all controls)
controls, imply both sensitivity to the choice of before and after periods and bias in longitudinal estimators. For this reason, we focus primarily on cross-sectional estimators that condition on lagged labor market outcomes in Sect. 5.3. Table 5 presents mean derivatives (or finite differences in the case of binary or categorical variables included as a series of indicators) and associated estimated standard errors from logit models of participation in JTPA overall estimated using the full control group and the ENPs along with similar models for each treatment stream estimated using the controls from that stream. The table also presents the p values from tests of the joint significance of categorical variables
Table 5 Adult males and females – mean derivatives from logit model of participation Overall
CT-OS
OJT
Other
0.163 (0.032) 0.020 (0.034) 0.107 (0.040)
−0.020 (0.008) 0.017 (0.014) 0.009 (0.016)
0.089 (0.022) −0.030 (0.019) −0.017 (0.026)
0.135 (0.033) 0.044 (0.028) 0.165 (0.037)
0.03
0.71
0.16
0.27
0.068 (0.029) 0.000 (0.027)
0.012 (0.016) 0.009 (0.014)
0.018 (0.019) −0.023 (0.019)
0.081 (0.021) 0.031 (0.021)
0.33
0.91
Males Site: Fort Wayne Site: Jersey City Site: Providence Test site = 0 (p-values) Race: black Race: othera Test race = 0 (p-values) Age Age squared Test age = 0 (p-values) Education <10 years Education 10–11 years Test education = 0 (p-values) Married at RA/ELb
−0.019 (0.010) 0.0002 (0.0001)
0.81 −0.016 (0.007) 0.0002 (0.0001)
0.28 −0.007 (0.006) 0.0001 (0.0001)
0.41
0.65
0.64
0.86
−0.057 (0.026) 0.015 (0.025)
−0.035 (0.014) −0.009 (0.012)
−0.010 (0.017) 0.017 (0.016)
−0.047 (0.016) 0.014 (0.014)
0.31
0.59
0.87
0.50
−0.062 (0.022)
−0.031 (0.013)
−0.034 (0.016)
−0.020 (0.014)
−0.015 (0.020) 0.018 (0.021) −0.075 (0.031)
Family income $3,000–$15,000 $3,000–$9,000
0.009 (0.008) −0.0002 (0.0001)
0.004 (0.019) 0.013 (0.036) 0.052 (0.038) −0.052 (0.043)
0.011 (0.017)
0.040 (0.024) 0.059 (0.027) 0.007 (0.029)
0.45
0.98
0.71
0.67
LF transition into unempl. LF transition into OLF Test LF transitions = 0 (p-values)
0.196 (0.031) 0.070 (0.037)
0.050 (0.016) 0.016 (0.019)
0.123 (0.025) 0.050 (0.029)
0.104 (0.021) 0.045 (0.024)
0.00
0.24
0.03
0.10
(Earnings Q-1)/1,000
−0.023 (0.012) −0.009 (0.011) 0.006 (0.002)
0.003 (0.008) −0.008 (0.007) 0.000 (0.000)
−0.022 (0.010) 0.002 (0.008) 0.003 (0.001)
−0.004 (0.008) −0.010 (0.007) 0.005 (0.001)
$9,000–$15,000 > $15,000 Test family income = 0 (p-values)
(Earnings Q-2)/1,000 (Earnings Q-3 to Q-6)/1,000
Table 5 continued Overall
CT-OS
OJT
Other
Test past earnings = 0 (p-values)
0.11
0.61
0.56
0.53
Pseudo-R square
0.28
0.30
0.28
0.31
Site: Fort Wayne
0.050 (0.025) 0.020 (0.022) 0.011 (0.024)
−0.009 (0.007) 0.005 (0.009) 0.001 (0.010)
0.018 (0.014) −0.004 (0.015) −0.016 (0.011)
0.057 (0.042) 0.024 (0.029) 0.041 (0.041)
0.08
0.55
0.13
0.09
0.000 (0.016) 0.013 (0.019)
0.002 (0.010) 0.004 (0.010)
−0.010 (0.010) −0.006 (0.013)
0.011 (0.012) 0.017 (0.019)
0.75
0.92
0.69
0.44
0.091 (0.045) 0.010 (0.043) −0.005 (0.014) −0.051 (0.010)
0.025 (0.033) 0.006 (0.033) 0.000 (0.008) −0.014 (0.004)
0.042 (0.034) 0.013 (0.036) −0.003 (0.011) −0.021 (0.008)
0.031 (0.033) −0.006 (0.017) −0.003 (0.009) −0.013 (0.008)
0.26
0.44
Females
Site: Jersey City Site: Providence Test site = 0 (p-values) Race: black Race: othera Test race = 0 (p-values) Welfare trans. No welfare → welfare Welfare trans. Welfare → no welfare Welfare trans. Welfare → welfare Indicator for missing Welfare information Test welfare trans. = 0 (p-values) Age Age squared HS dropout Educ. >13 years Test education = 0 (p-values) Married at RA/ELb Family income $3,000 – $9,000 $9,000 – $15,000 >$15,000 Test family income = 0 (p-values) LF transition unm → emp LF transition
0.01 −0.003 (0.006) 0.0000 (0.0001) −0.015 (0.013) 0.001 (0.017)
0.66 −0.001 (0.003) 0.0000 (0.0001) −0.005 (0.007) 0.000 (0.009)
−0.002 (0.004) 0.0000 (0.0001) −0.010 (0.010) −0.005 (0.014)
0.001 (0.004) 0.0000 (0.0001) 0.001 (0.008) 0.006 (0.010)
0.47
0.73
0.60
0.80
−0.058 (0.018)
−0.015 (0.011)
−0.024 (0.013)
−0.021 (0.013)
0.024 (0.017) 0.008 (0.025) 0.009 (0.027)
0.005 (0.010) 0.007 (0.014) 0.005 (0.016)
0.015 (0.013) 0.005 (0.018) 0.002 (0.021)
0.005 (0.011) −0.002 (0.017) 0.007 (0.017)
0.57
0.94
0.66
0.93
0.062 (0.029) 0.039
−0.003 (0.025) 0.001
0.030 (0.020) 0.022
0.032 (0.019) 0.016
Table 5 continued
olf → emp LF transition emp → olf LF transition unm → olf LF transition olf → olf LF transition into unempl. Test LF transitions = 0 (p-values) Pseudo-R square
Overall
CT-OS
OJT
Other
(0.035) 0.045 (0.029) 0.071 (0.035) 0.026 (0.024) 0.103 (0.023)
(0.028) 0.011 (0.015) 0.018 (0.018) 0.009 (0.088) 0.025 (0.013)
(0.024) 0.025 (0.021) 0.029 (0.030) −0.001 (0.020) 0.044 (0.016)
(0.021) 0.010 (0.022) 0.028 (0.023) 0.068 (0.016) 0.037 (0.016)
0.00
0.26
0.02
0.05
0.17
0.15
0.21
0.18
The values in the table are mean derivatives; standard errors are in parentheses a Due to small sample sizes, we combine the “Hispanic” and “other” categories here b RA/EL is the month of random assignment for the experimental controls and the month of
measured eligibility for the ENPs Table 6 Adult males and females — tests of equality of logit coefficientsa Adult males Site Race Age and age squared Education Married at RA/ELb Family income LF transitions Past earnings (last two quarters)
Adult females chi2(6) = 26.4 p-value = 0.00 chi2(4) = 7.03 p-value = 0.13 chi2(4) = 1.2 p-value = 0.88 chi2(4) = 4.44 p-value = 0.35 chi2(2) = 0.71 p-value = 0.70 chi2(6) = 5.26 p-value = 0.51 chi2(4) = 0.12 p-value = 1.00 chi2(4) = 1.66 p-value = 0.80
Site Race Age and age squared Education Married at RA/EL Family income LF transitions Welfare transitions
chi2(6) = 71.61 p-value = 0.00 chi2(4) = 16.54 p-value = 0.00 chi2(4) = 9.73 p-value = 0.05 chi2(4) = 5.91 p-value = 0.21 chi2(2) = 0.55 p-value = 0.76 chi2(6) = 4.12 p-value = 0.66 chi2(12) = 18.00 p-value = 0.12 chi2(8) = 3.64 p-value = 0.89
a The null hypothesis is equal coefficients across the three treatment streams b RA/EL is the month of random assignment for the experimental controls and the month of
measured eligibility for the ENPs
included as a series of indicators. Table 6 presents the chi-squared statistics and related p values from tests of the null of equal coefficients across treatment streams for particular variables or categories of variables. For the matching estimators applied below, we want to include all the variables that affect both participation and outcomes. The specifications presented here differ somewhat from those in Heckman et al. (1997, 1998a) and Heckman and Smith (1999), upon whose analyses we build. Those papers consider
economic theory, institutional knowledge, predictive power and statistical significance as variable selection criteria. Our choices emphasize the knowledge gained from those earlier papers combined with a desire for greater parsimony given the relatively smaller sample sizes available once we split the sample into treatment streams. We considered several less parsimonious specifications and found that they yielded the same general conclusions. Heckman and Navarro (2004) discuss the variable selection issue in greater depth.

For adult men, our final specification includes site and race indicators, age and age squared, education categories, marital status, categories of family income in the year prior to RA/EL, labor force status transitions (collapsed into coarser categories) and own quarterly earnings in quarters prior to RA/EL. The specification for adult women differs in that it includes welfare status transitions but omits the quarterly earnings variables (which matter less for this group) and, because of the larger sample, it does not collapse the labor force status transition categories. We do not worry about the potential endogeneity of the labor force histories for reasons outlined in Frölich (2006). To produce consistent estimates of the treatment effects, we only need to balance the unobservables conditional on X and D, not to make the bias zero conditional on X and D; non-parametric regression accomplishes this because it compares, in the limit, only observations with the same X (or the same propensity score).

Balancing test results for all of the cross-sectional matching estimators appear in Table A3. Figures 3 and 4 show the distributions of propensity scores. Consistent with the non-trivial numbers of observations lost when imposing the common support condition in Table 1, we find important support problems for larger values of the scores. For the full control group, our findings mimic those presented in Heckman and Smith (1999, Table 6) and Heckman et al. (1998a, Table III). In particular, they replicate the importance of the labor force transition variables for both groups, as well as the welfare transition variable for adult women and pre-RA/EL earnings for adult men.

For the individual treatment streams, we find both similarities with the overall results and differences, in addition to a general reduction in precision due to the reduced sample sizes. In particular, we find evidence that the coefficients on the site variables, the race variables and, for women, the labor force status transition variables differ among the three streams. As the sites differ strongly in their relative emphasis on the different treatment types, the first finding comes as no surprise. For both groups, blacks and other non-whites have higher probabilities of assignment to the “other” stream relative to whites and lower probabilities of assignment to the OJT stream. This finding suggests that, conditional on the other covariates, caseworkers, employers or the participants themselves think that non-whites make better candidates for JSA, the most common service in the other stream, and worse candidates for OJT. This could reflect real or perceived discrimination on the part of caseworkers or firms providing OJT positions or it could mean that non-whites more often receive non-JSA services within the other stream. For adult women, the labor force transitions have much smaller mean derivatives in the CT-OS stream than in the other two streams (the same pattern holds
316
Density
M. Plesca, J. Smith 20 18 16 14 12 10 8 6 4 2 0
ENP CTRL
0.05
0.15
0.25
0.35
0.45 0.55 0.65 0.75 Propensity Score
0.85
0.95
Density
20 15
ENP CT-OS
10 5 0 0.05
0.15
0.25
0.35
0.45 0.55 0.65 0.75 Propensity Score
0.85
0.95
Density
20 15
ENP OJT
10 5 0 0.05
0.15
0.25
0.35
0.45 0.55 0.65 0.75 Propensity Score
0.85
0.95
Density
20 15
ENP OTHER
10 5 0 0.05
0.15
0.25
0.35
0.45 0.55 0.65 0.75 Propensity Score
0.85
0.95
Fig. 3 Adult males — distribution of propensity score
for adult men but does not reach the usual levels of statistical significance). Also, women out of the labor force in the 7 months up to and including RA/EL have much higher mean probabilities of participation, relative to women employed during those months, in the other treatment stream than in the CT-OS and OJT treatment streams. The labor force transition findings suggest that these variables contain information about the individual’s readiness for, and eagerness to obtain, employment; thus, they matter for the OJT and other streams, whose members typically receive OJT or JSA, both of which aim at immediate placement. Put differently, for this group, distinguishing among sets of individuals all of whom have zero earnings in the month of RA/EL (which means six of the nine transition categories) matters for adult women in two of the streams,
317
Density
Evaluating multi-treatment programs 20 18 16 14 12 10 8 6 4 2 0
ENP CTRL
0.05
0.15
0.25
0.35
0.45
0.55
0.65
0.75
0.85
0.95
Density
Propensity Score 20 18 16 14 12 10 8 6 4 2 0
ENP CT-OS
0.05
0.15
0.25
0.35
0.45
0.55
0.65
0.75
0.85
0.95
Density
Propensity Score 20 18 16 14 12 10 8 6 4 2 0
ENP OJT
0.05
0.15
0.25
0.35
0.45
0.55
0.65
0.75
0.85
0.95
Density
Propensity Score 20 18 16 14 12 10 8 6 4 2 0
ENP OTHER
0.05
0.15
0.25
0.35
0.45 0.55 0.65 0.75 Propensity Score
0.85
0.95
Fig. 4 Adult females — distribution of propensity score
and reinforces the value of collecting information on labor force status at a fine level of temporal detail. Overall, the differential effects of site, race and labor force status transitions across the three treatment streams represent important and interesting findings. These results enrich our view of how JTPA operated, suggest hypotheses to test in future evaluations of other multi-treatment programs and illustrate the potential knowledge gain associated with separately examining individual treatments within multi-treatment programs.
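As an illustration of the stream-specific participation models discussed above, the following minimal Python sketch fits a logit of stream participation (versus the ENP comparison group) on a covariate set in the spirit of the specification described in the text. The data frame and column names (group, site, educ_cat, faminc_cat, lf_transition, earn_q1, and so on) are hypothetical placeholders rather than the variables of the actual NJS extract.

```python
# Illustrative sketch only: stream-specific propensity-score logits (ENP comparisons vs.
# one treatment stream at a time). All column names below are hypothetical placeholders.
import statsmodels.formula.api as smf

SPEC = ("treated ~ C(site) + black + other_race + age + I(age**2) + C(educ_cat) "
        "+ married + C(faminc_cat) + C(lf_transition) + earn_q1 + earn_q2 + earn_q3_q6")
# For adult women the text replaces the earnings terms with welfare-transition indicators
# and uses the uncollapsed labor force transition categories.

def fit_pscore(df, stream):
    """Logit of membership in `stream` versus the ENP comparison group; returns scored data."""
    sub = df[df["group"].isin([stream, "ENP"])].copy()
    sub["treated"] = (sub["group"] == stream).astype(int)
    fit = smf.logit(SPEC, data=sub).fit(disp=False)
    sub["pscore"] = fit.predict(sub)
    return fit, sub

# Usage: one score per comparison, as plotted in Figs. 3 and 4.
# for s in ["CTRL", "CT-OS", "OJT", "OTHER"]:
#     _, scored = fit_pscore(df, s)
```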
5.3 Selection bias and the performance of matching estimators

Table 7 presents bias estimates, along with bootstrap standard errors, for the matching estimators described in Sect. 4.3. We also present the estimated Root Mean Squared Error (RMSE) associated with each estimator, defined as:

$$\text{RMSE} = \sqrt{\operatorname{var}(\text{BIAS}) + \text{BIAS}^2}.$$
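The following short Python sketch shows how a bias estimate, its bootstrap standard error and the resulting RMSE fit together; `estimate_bias` stands for any of the estimators compared below and is a placeholder supplied by the user, not code from the paper.

```python
import numpy as np

def bias_rmse(data, estimate_bias, n_boot=50, seed=0):
    """Point estimate of the bias, bootstrap standard error and RMSE = sqrt(var + bias^2).

    `estimate_bias(data)` should return the estimated selection bias, i.e. the adjusted
    mean-outcome gap between experimental controls and ENP comparisons (zero when the
    estimator removes all selection bias). 50 replications mirrors the number used below.
    """
    rng = np.random.default_rng(seed)
    bias = estimate_bias(data)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(data), size=len(data))   # resample rows with replacement
        draws.append(estimate_bias(data.iloc[idx]))
    se = np.std(draws, ddof=1)
    return bias, se, np.sqrt(se ** 2 + bias ** 2)
```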
Given the large number of estimators we examine and the computational burden associated with bootstrapping, we limited ourselves to only 50 bootstrap replications, a number likely well below that implied by the analysis in Andrews and Buchinsky (2000); the reader should thus keep in mind that the variances themselves represent noisy estimates. For simplicity, we present bootstrap standard errors for all of the estimators other than OLS and the cell matching estimators, despite the problems with doing so in the case of the nearest neighbor estimators laid out in Abadie and Imbens (2006). Fortunately, their Monte Carlo analysis suggests that use of the bootstrap does not lead to severely misleading inferences.

The outcome variables consist of the sum of earnings in the 18 months after random assignment and employment in quarter six after random assignment. Recall that we estimate biases, not average treatment effects. A bias of zero means that an estimator successfully removes all differences between the experimental control group and the non-experimental comparison group. We have arranged the estimators in the table in logical groups: OLS heads the table, followed by the two cell matching estimators, the basic cross-sectional matching estimators, the bias-corrected matching estimators and, finally, the longitudinal difference-in-differences matching estimators.

Table 7 Adult males and females — bias estimates from propensity score matching (each cell reports Bias / Std. err. / RMSE)

Males, outcome: sum of earnings over 18 months after RA/EL
| Estimator | Overall | CT-OS | OJT | Other |
| 1. OLS (a) | 1323.0 / 1097.6 / 1719.1 | 258.7 / 1826.6 / 1844.9 | 1083.9 / 1264.0 / 1665.1 | 1943.0 / 1639.3 / 2542.2 |
| 2. LF transition cell matching | -1667.7 / 1196.4 / 2052.4 | -3566.9 / 1640.0 / 3925.9 | -2204.0 / 1350.1 / 2584.6 | -1455.6 / 1575.3 / 2144.8 |
| 3. P-score decile cell matching | 49.6 / 1492.1 / 1493.0 | -1834.9 / 1683.8 / 2490.4 | -528.7 / 1431.3 / 1525.8 | -2147.3 / 1612.7 / 2685.5 |
| 4. 1 nearest neighbor matching | -1281.6 / 1979.2 / 2357.9 | 512.2 / 539.3 / 743.8 | -1332.5 / 1500.5 / 2006.7 | -2969.9 / 2351.0 / 3787.8 |
| 5. Nearest 12 neighbors matching (optimal within 25 neighbors) (b) | 176.4 / 1849.4 / 1857.8 | -1044.3 / 883.4 / 1367.9 | -265.1 / 1381.6 / 1406.8 | -716.5 / 2121.5 / 2239.3 |
| 6. Optimal kernel (c) | -555.5 / 1149.5 / 1276.6 | -551.5 / 2771.0 / 2825.4 | -1238.2 / 1803.6 / 2187.8 | -2589.4 / 1803.5 / 3155.5 |
| 7. Optimal local linear (c) | -369.2 / 1325.8 / 1376.3 | -2077.7 / 2382.0 / 3160.8 | -1535.6 / 1495.7 / 2143.6 | -865.0 / 2054.6 / 2229.3 |
| 8. Bias-corrected 1 nearest neighbor matching | -968.3 / 1538.3 / 1817.7 | -58.5 / 3633.2 / 3633.6 | -1100.7 / 1679.5 / 2008.1 | -2562.8 / 2313.8 / 3452.8 |
| 9. Bias-corrected kernel matching | -36.4 / 1008.1 / 1008.8 | -967.0 / 2135.5 / 2344.3 | -149.9 / 1534.1 / 1541.4 | -1237.0 / 1489.5 / 1936.2 |
| 10. Bias-corrected local linear matching | -968.3 / 1538.3 / 1817.7 | -677.7 / 2337.9 / 2434.1 | -1100.7 / 1679.5 / 2008.1 | -2562.8 / 2313.8 / 3452.8 |
| 11. One nearest neighbor difference-in-differences matching | -1278.1 / 1864.1 / 2260.2 | 603.4 / 4695.0 / 4733.6 | -2439.0 / 3135.0 / 3972.0 | -1675.0 / 3143.0 / 3561.4 |
| 12. Kernel difference-in-differences matching (c) | -344.5 / 1104.8 / 1157.3 | -2157.6 / 3185.8 / 3847.7 | 717.4 / 1398.8 / 1572.0 | 2110.7 / 1744.9 / 2738.5 |

Males, outcome: employment in quarter 6 after RA/EL
| Estimator | Overall | CT-OS | OJT | Other |
| 1. OLS (a) | 0.135 / 0.048 / 0.143 | 0.165 / 0.085 / 0.186 | 0.093 / 0.053 / 0.107 | 0.165 / 0.061 / 0.176 |
| 2. LF transition cell matching | 0.080 / 0.050 / 0.094 | 0.067 / 0.078 / 0.102 | 0.052 / 0.054 / 0.075 | 0.083 / 0.059 / 0.101 |
| 3. P-score decile cell matching | 0.065 / 0.060 / 0.088 | 0.107 / 0.083 / 0.135 | -0.006 / 0.051 / 0.051 | 0.038 / 0.057 / 0.069 |
| 4. 1 nearest neighbor matching | 0.041 / 0.168 / 0.173 | 0.163 / 0.095 / 0.189 | 0.005 / 0.151 / 0.151 | -0.019 / 0.190 / 0.191 |
| 5. Nearest 12 neighbors matching (optimal within 25 neighbors) (b) | 0.134 / 0.155 / 0.205 | 0.105 / 0.088 / 0.137 | 0.071 / 0.137 / 0.154 | 0.123 / 0.175 / 0.214 |
| 6. Optimal kernel (d) | 0.006 / 0.063 / 0.063 | 0.072 / 0.138 / 0.155 | -0.049 / 0.068 / 0.084 | -0.019 / 0.052 / 0.055 |
| 7. Optimal local linear (d) | 0.043 / 0.281 / 0.284 | 0.083 / 0.334 / 0.344 | 0.016 / 0.427 / 0.427 | 0.045 / 0.262 / 0.266 |
| 8. Bias-corrected 1 nearest neighbor matching | 0.043 / 0.070 / 0.082 | 0.125 / 0.157 / 0.201 | 0.005 / 0.082 / 0.082 | -0.001 / 0.112 / 0.112 |
| 9. Bias-corrected kernel matching | 0.029 / 0.061 / 0.068 | 0.076 / 0.129 / 0.149 | -0.006 / 0.077 / 0.078 | 0.090 / 0.078 / 0.119 |
| 10. Bias-corrected local linear matching | 0.051 / 0.067 / 0.084 | 0.056 / 0.134 / 0.145 | 0.035 / 0.081 / 0.089 | 0.089 / 0.081 / 0.120 |
| 11. One nearest neighbor difference-in-differences matching | -0.086 / 0.191 / 0.209 | -0.021 / 0.379 / 0.379 | -0.081 / 0.245 / 0.258 | -0.087 / 0.204 / 0.222 |
| 12. Kernel difference-in-differences matching (d) | 0.074 / 0.057 / 0.093 | 0.008 / 0.141 / 0.142 | 0.013 / 0.074 / 0.075 | 0.057 / 0.085 / 0.102 |

Females, outcome: sum of earnings over 18 months after RA/EL
| Estimator | Overall | CT-OS | OJT | Other |
| 1. OLS (a) | 1321.4 / 483.7 / 1407.2 | 1031.4 / 609.2 / 1197.9 | 2052.4 / 693.7 / 2166.5 | 695.8 / 730.7 / 1009.0 |
| 2. LF transition cell matching | 1569.6 / 445.6 / 1631.6 | 849.7 / 539.7 / 1006.5 | 2291.7 / 640.6 / 2379.6 | 1451.3 / 672.7 / 1599.6 |
| 3. P-score decile cell matching | 1181.6 / 537.6 / 1298.1 | 794.6 / 614.3 / 1004.4 | 1556.6 / 732.2 / 1720.2 | 127.6 / 786.7 / 797.0 |
| 4. 1 nearest neighbor matching | 949.1 / 936.2 / 1333.2 | 1181.0 / 995.6 / 1544.7 | 1797.4 / 1233.3 / 2179.8 | -511.0 / 1885.3 / 1953.3 |
| 5. Nearest 18 neighbors matching (optimal within 20 neighbors) (b) | 1062.0 / 639.0 / 1239.4 | 796.2 / 663.6 / 1036.5 | 1106.5 / 848.8 / 1394.5 | 266.3 / 800.9 / 844.0 |
| 6. Optimal kernel (e) | 1176.1 / 577.4 / 1310.2 | 393.2 / 663.3 / 771.1 | 1175.6 / 986.5 / 1534.7 | -564.3 / 1145.0 / 1276.6 |
| 7. Optimal local linear (e) | 857.2 / 574.3 / 1031.8 | 868.7 / 646.4 / 1082.8 | 1380.6 / 910.2 / 1653.7 | -200.6 / 1303.2 / 1318.5 |
| 8. Bias-corrected 1 nearest neighbor matching | 1229.9 / 723.6 / 1427.0 | 1168.4 / 908.0 / 1479.8 | 1576.8 / 1087.4 / 1915.4 | -92.5 / 1435.3 / 1438.2 |
| 9. Bias-corrected kernel matching | 1203.1 / 517.0 / 1309.5 | 816.9 / 641.6 / 1038.7 | 1052.8 / 918.5 / 1397.1 | -551.9 / 1164.7 / 1288.8 |
| 10. Bias-corrected local linear matching | 1229.9 / 648.2 / 1390.2 | 1168.4 / 942.3 / 1501.0 | 1576.8 / 1032.8 / 1884.9 | -92.5 / 1533.9 / 1536.7 |
| 11. One nearest neighbor difference-in-differences matching | 1352.6 / 820.8 / 1582.1 | 1862.4 / 1075.2 / 2150.5 | 1822.0 / 1561.2 / 2399.4 | 5.7 / 1638.6 / 1638.6 |
| 12. Kernel difference-in-differences matching (e) | 1292.0 / 415.7 / 1357.2 | 754.1 / 605.3 / 967.0 | 2036.5 / 674.4 / 2145.3 | 832.2 / 776.9 / 1138.5 |

Females, outcome: employment in quarter 6 after RA/EL
| Estimator | Overall | CT-OS | OJT | Other |
| 1. OLS (a) | 0.089 / 0.031 / 0.094 | 0.083 / 0.042 / 0.093 | 0.111 / 0.040 / 0.118 | 0.070 / 0.046 / 0.083 |
| 2. LF transition cell matching | 0.093 / 0.032 / 0.098 | 0.056 / 0.041 / 0.069 | 0.137 / 0.041 / 0.143 | 0.085 / 0.046 / 0.097 |
| 3. P-score decile cell matching | 0.088 / 0.037 / 0.095 | 0.068 / 0.047 / 0.082 | 0.114 / 0.052 / 0.125 | 0.012 / 0.047 / 0.048 |
| 4. 1 nearest neighbor matching | 0.087 / 0.050 / 0.100 | 0.097 / 0.069 / 0.119 | 0.091 / 0.084 / 0.124 | 0.003 / 0.090 / 0.090 |
| 5. Nearest 18 neighbors matching (optimal within 20 neighbors) (b) | 0.075 / 0.034 / 0.082 | 0.071 / 0.042 / 0.082 | 0.103 / 0.054 / 0.117 | 0.065 / 0.071 / 0.096 |
| 6. Optimal kernel (f) | 0.077 / 0.030 / 0.082 | 0.000 / 0.058 / 0.058 | 0.121 / 0.043 / 0.129 | 0.025 / 0.054 / 0.060 |
| 7. Optimal local linear (f) | 0.074 / 0.044 / 0.086 | 0.024 / 0.048 / 0.054 | 0.097 / 0.064 / 0.116 | 0.034 / 0.067 / 0.075 |
| 8. Bias-corrected 1 nearest neighbor matching | 0.098 / 0.047 / 0.109 | 0.083 / 0.066 / 0.106 | 0.089 / 0.075 / 0.116 | 0.031 / 0.070 / 0.077 |
| 9. Bias-corrected kernel matching | 0.077 / 0.029 / 0.083 | 0.052 / 0.057 / 0.077 | 0.110 / 0.039 / 0.116 | 0.055 / 0.053 / 0.076 |
| 10. Bias-corrected local linear matching (f) | 0.098 / 0.039 / 0.106 | 0.083 / 0.067 / 0.107 | 0.089 / 0.072 / 0.115 | 0.031 / 0.085 / 0.090 |
| 11. One nearest neighbor difference-in-differences matching | 0.057 / 0.060 / 0.083 | 0.075 / 0.101 / 0.126 | 0.128 / 0.097 / 0.161 | 0.066 / 0.093 / 0.114 |
| 12. Kernel difference-in-differences matching (f) | 0.072 / 0.055 / 0.090 | 0.028 / 0.069 / 0.075 | 0.137 / 0.052 / 0.147 | 0.068 / 0.061 / 0.091 |

(a) Robust standard errors for Estimator 1 (OLS) and bootstrap standard errors (50 repetitions) for estimators 4 to 12
(b) Using 12 (18) neighbors in nearest neighbor matching minimizes RMSE among the first 25 (20) neighbors for males (females)
(c) Optimal kernel and bandwidth are chosen with cross-validation to minimize RMSE: Estimator 6 (Optimal kernel) Epanechnikov 0.0140 (Epanechnikov 0.0062 in CT-OS); Estimator 7 (Optimal local linear) Tricube 0.2962 (Epanechnikov 0.0985 in CT-OS); Estimator 12 (Kernel D-I-D matching) Gaussian 0.0273 (Epanechnikov 0.0058 in CT-OS)
(d) Optimal kernel and bandwidth are chosen with cross-validation to minimize RMSE: Estimator 6 (Optimal kernel) Gaussian 0.0518 (Epanechnikov 0.0066 in CT-OS); Estimator 7 (Optimal local linear) Gaussian 0.1344 (Epanechnikov 0.0107 in CT-OS); Estimator 12 (Kernel D-I-D matching) Epanechnikov 0.0570 (Epanechnikov 0.0057 in CT-OS)
(e) Optimal kernel and bandwidth are chosen with cross-validation to minimize RMSE: Estimator 6 (Optimal kernel) Gaussian 0.0045; Estimator 7 (Optimal local linear) Tricube 0.059; Estimator 12 (Kernel D-I-D matching) Gaussian 0.147
(f) Optimal kernel and bandwidth are chosen with cross-validation to minimize RMSE: Estimator 6 (Optimal kernel) Tricube 0.0137; Estimator 7 (Optimal local linear) Tricube 0.123; Estimator 12 (Kernel D-I-D matching) Tricube 0.034

We can characterize the results in Table 7 in terms of five main patterns, four of which relate back to conclusions drawn in Heckman et al. (1997, 1998c) for JTPA viewed as a single treatment. First, we find little evidence of large biases from applying matching in this context, with these conditioning variables, to the individual treatment streams. We do not want to push this finding very hard as, given our standard errors, we can also not distinguish our estimates from a wide range of population bias values, both positive and negative. Moreover, we have substantively large point estimates for the biases in some cases, though less often for the better performing estimators. On the other hand, if the data wanted to send a strong signal that matching fails miserably here, they could have done so, but did not. Second, our estimates reveal the possibility of some substantively meaningful cancellation in bias across treatment streams when aggregating JTPA into a single treatment.

Third, the three simple estimators (OLS, matching on labor force status transition cells and propensity score stratification) tend to have lower variances than the other matching estimators. Of the three simple estimators, stratification on the propensity score clearly dominates. Indeed, its solid performance on all three dimensions comports with its frequent use in the applied statistics literature and suggests its value as a baseline for more complicated matching schemes. The other two estimators do less well in terms of bias, leading to relatively mediocre RMSEs.

Fourth, Heckman et al. (1997, 1998a) argued that, in general, the details of the matching method do not matter much. Our results suggest a more nuanced picture, keeping in mind, as always, the imprecision both in the bias estimates and in the variance estimates (and thereby in the RMSE estimates). In particular, we find that single nearest neighbor matching performs quite poorly, consistent with its performance in the very useful Monte Carlo analysis in Frölich (2004). This suggests the wisdom of the general preference for kernel matching in the applied economics literature. Bias-corrected single nearest neighbor matching often does better in terms of both bias and variance, supporting the use of ex post regression following matching in the applied statistics literature. We do not observe consistent improvements in RMSE from ex post regression for the other cross-sectional matching estimators in our data. Also, nearest neighbor matching with a number of neighbors (sometimes surprisingly large) chosen by cross-validation generally yields a noticeably lower variance than single nearest neighbor matching, as one would expect, but only modestly higher bias, so that in RMSE terms it generally wins the contest between the two estimators.

Fifth and finally, as noted in Heckman and Smith (1999), no strong pattern emerges in terms of biases, variances or RMSEs that would imply a clear choice between cross-sectional and difference-in-differences matching in this context. This result differs strongly from that found by Smith and Todd (2005a) using the Supported Work data. This difference arises from the fact that the NJS data, unlike the Supported Work data, do not embody time-invariant biases resulting from geographic mismatch or from the use of outcome variables measured differently for the treated and untreated units.
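As a concrete illustration of the cross-sectional kernel matching estimators compared above, the following minimal sketch computes a kernel-matching bias estimate on the propensity score. The Epanechnikov kernel and the function and argument names are our own choices for this example, not code from the paper.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel; zero outside |u| <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_matching_bias(p_ctrl, y_ctrl, p_enp, y_enp, bandwidth):
    """Kernel matching on the propensity score.

    For each experimental control, ENP comparisons are weighted by kernel distance in the
    score; the bias estimate is the mean gap between the control outcome and its
    kernel-weighted comparison counterfactual (zero if selection bias is fully removed).
    """
    gaps = []
    for p_i, y_i in zip(p_ctrl, y_ctrl):
        w = epanechnikov((p_enp - p_i) / bandwidth)
        if w.sum() == 0.0:           # no comparison mass nearby: off the common support
            continue
        gaps.append(y_i - np.dot(w, y_enp) / w.sum())
    return float(np.mean(gaps))
```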
6 Conclusions

Multi-treatment programs appear in many contexts, in particular that of active labor market policy. In this paper, we have considered the trade-offs involved in evaluating such programs as disaggregated treatments rather than an aggregate whole, and have illustrated some of our points using data from the U.S. National JTPA Study. Though our evidence suffers from the relatively small sample sizes that remain once we disaggregate, we nonetheless find interesting differences in experimental estimates and in the determinants of participation across the three treatment streams. These differences add to our understanding of the program and illustrate the potential for cancellation across treatments when aggregating to hide relevant differences among them. We also add to the literature on the performance of alternative matching estimators, where we have more to say than did the aggregative analyses in Heckman et al. (1997, 1998a). In particular, our results highlight the relatively poor performance of the widely used single nearest neighbor matching estimator.

Acknowledgments We gratefully acknowledge financial support from the Social Sciences and Humanities Research Council of Canada and the CIBC Chair in Human Capital and Productivity at the University of Western Ontario. We thank Chris Mitchell for his excellent research assistance, Michael Lechner for helpful discussions, two anonymous referees and our editor, Bernd Fitzenberger, for their helpful comments and Dan Black for his Stata cross-validation code.
References

Abadie A, Imbens G (2006) On the failure of the bootstrap for matching estimators. Unpublished manuscript, University of California at Berkeley
Andrews D, Buchinsky M (2000) A three-step method for choosing the number of bootstrap repetitions. Econometrica 68:23–51
Angrist J, Krueger A (1999) Empirical strategies in labor economics. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, Vol 3A. North-Holland, Amsterdam, pp 1277–1366
Ashenfelter O (1978) Estimating the effect of training programs on earnings. Rev Econ Stat 60:47–57
Black D, Smith J (2004) How robust is the evidence on the effects of college quality? Evidence from matching. J Econ 121:99–124
Black D, Smith J, Berger M, Noel B (2003) Is the threat of reemployment services more effective than the services themselves? Evidence from the UI system using random assignment. Am Econ Rev 93:1313–1327
Bloom H, Orr L, Cave G, Bell S, Doolittle F (1993) The National JTPA Study: title II-A impacts on earnings and employment at 18 months. Abt Associates, Bethesda
Bloom H, Orr L, Bell S, Cave G, Doolittle F, Lin W, Bos J (1997) The benefits and costs of JTPA title II-A programs: key findings from the National Job Training Partnership Act study. J Hum Resources 32:549–576
Card D, Sullivan D (1988) Measuring the effect of subsidized training programs on movements in and out of employment. Econometrica 56:497–530
Courty P, Marschke G (2004) An empirical investigation of gaming responses to explicit performance incentives. J Labor Econ 22:23–56
Dehejia R, Wahba S (1999) Causal effects in non-experimental studies: re-evaluating the evaluation of training programs. J Am Stat Assoc 94:1053–1062
Dehejia R, Wahba S (2002) Propensity score matching methods for non-experimental causal studies. Rev Econ Stat 84:139–150
Devine T, Heckman J (1996) The consequences of eligibility rules for a social program: a study of the Job Training Partnership Act. Res Labor Econ 15:111–170
Dolton P, Smith J, Azevedo JP (2006) The econometric evaluation of the new deal for lone parents. Unpublished manuscript, University of Michigan
Doolittle F, Traeger L (1990) Implementing the National JTPA Study. Manpower Demonstration Research Corporation, New York
Dorsett R (2006) The New Deal for Young People: effect on the labor market status of young men. Labour Econ 13:405–422
Fan J, Gijbels I (1996) Local polynomial modeling and its applications. Chapman and Hall, New York
Fisher R (1935) The design of experiments. Oliver and Boyd, London
Fitzenberger B, Speckesser S (2005) Employment effects of the provision of specific professional skills and techniques in Germany. IZA Working paper no. 1868
Frölich M (2004) Finite sample properties of propensity score matching and weighting estimators. Rev Econ Stat 86:77–90
Frölich M (2006) A note on parametric and nonparametric regression in the presence of endogenous control variables. IZA working paper no. 2126
Galdo J, Smith J, Black D (2006) Bandwidth selection and the estimation of treatment effects with nonexperimental data. Unpublished manuscript, University of Michigan
Gerfin M, Lechner M (2002) Microeconometric evaluation of active labour market policy in Switzerland. Econ J 112:854–893
Heckman J (1979) Sample selection bias as a specification error. Econometrica 47:153–161
Heckman J, Hotz VJ (1989) Choosing among alternative nonexperimental methods for estimating the impact of training programs. J Am Stat Assoc 84:862–874
Heckman J, Navarro S (2004) Using matching, instrumental variables, and control functions to estimate economic choice models. Rev Econ Stat 86:30–57
Heckman J, Smith J (1999) The pre-programme earnings dip and the determinants of participation in a social programme: implications for simple program evaluation strategies. Econ J 109:313–348
Heckman J, Smith J (2000) The sensitivity of experimental impact estimates: evidence from the National JTPA Study. In: Blanchflower D, Freeman R (eds) Youth employment and joblessness in advanced countries. University of Chicago Press, Chicago
Heckman J, Smith J (2004) The determinants of participation in a social program: evidence from a prototypical job training program. J Labor Econ 22:243–298
Heckman J, Todd P (1995) Adapting propensity score matching and selection models to choice-based samples. Unpublished manuscript, University of Chicago
Heckman J, Ichimura H, Todd P (1997) Matching as an econometric evaluation estimator: evidence from evaluating a job training program. Rev Econ Stud 64:605–654
Heckman J, Ichimura H, Smith J, Todd P (1998a) Characterizing selection bias using experimental data. Econometrica 66:1017–1098
Heckman J, Lochner L, Taber C (1998b) Explaining rising wage inequality: explorations with a dynamic general equilibrium model of labor earnings with heterogeneous agents. Rev Econ Dynam 1:1–58
Heckman J, Smith J, Taber C (1998c) Accounting for dropouts in evaluations of social programs. Rev Econ Stat 80:1–14
Heckman J, LaLonde R, Smith J (1999) The economics and econometrics of active labor market programs. In: Ashenfelter O, Card D (eds) Handbook of Labor Economics, Vol 3A. North-Holland, Amsterdam, pp 1865–2097
Heckman J, Hohmann N, Smith J, Khoo M (2000) Substitution and dropout bias in social experiments: a study of an influential social experiment. Q J Econ 115:651–694
Heckman J, Heinrich C, Smith J (2002) The performance of performance standards. J Hum Resources 36:778–811
Heinrich C, Marschke G, Zhang A (1999) Using administrative data to estimate the cost-effectiveness of social program services. Unpublished manuscript, University of Chicago
Ho D, Imai K, King G, Stuart E (2007) Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Forthcoming in: Political Analysis
Imbens G (2000) The role of the propensity score in estimating dose-response functions. Biometrika 87:706–710
Kemple J, Doolittle F, Wallace J (1993) The National JTPA Study: site characteristics and participation patterns. Manpower Demonstration Research Corporation, New York
Kordas G, Lehrer S (2004) Matching using semiparametric propensity scores. Unpublished manuscript, Queen's University
LaLonde R (1986) Evaluating the econometric evaluations of training programs using experimental data. Am Econ Rev 76:604–620
Lechner M (2001) Identification and estimation of causal effects of multiple treatments under the conditional independence assumption. In: Lechner M, Pfeiffer P (eds) Econometric evaluation of labour market policies. Physica, Heidelberg
Lechner M, Smith J (2007) What is the value added by caseworkers? Labour Econ 14:135–151
Lechner M, Miquel R, Wunsch C (2008) The curse and blessing of training the unemployed in a changing economy: the case of East Germany after unification. Forthcoming in: German Economic Review
Lise J, Seitz S, Smith J (2005) Equilibrium policy experiments and the evaluation of social programs. NBER working paper no. 10283
Manski C (1996) Learning about treatment effects from experiments with random assignment to treatment. J Hum Resources 31:707–733
Michalopoulos C, Tattrie D, Miller C, Robins P, Morris P, Gyarmati D, Redcross C, Foley K, Ford R (2002) Making work pay: final report on the Self-Sufficiency Project for long-term welfare recipients. Social Research and Demonstration Corporation, Ottawa
Neyman J (1923) Statistical problems in agricultural experiments. J R Stat Soc 2:107–180
Orr L, Bloom H, Bell S, Lin W, Cave G, Doolittle F (1994) The National JTPA Study: impacts, benefits and costs of title II-A. Abt Associates, Bethesda
Pagan A, Ullah A (1999) Nonparametric econometrics. Cambridge University Press, Cambridge
Pechman J, Timpane M (1975) Work incentives and income guarantees: the New Jersey negative income tax experiment. Brookings Institution, Washington DC
Plesca M (2006) A general equilibrium evaluation of the employment service. Unpublished manuscript, University of Guelph
Quandt R (1972) Methods of estimating switching regressions. J Am Stat Assoc 67:306–310
Racine J, Li Q (2005) Nonparametric estimation of regression functions with both categorical and continuous data. J Econ 119:99–130
Rosenbaum P, Rubin D (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55
Rosenbaum P, Rubin D (1984) Reducing bias in observational studies using subclassification on the propensity score. J Am Stat Assoc 79:516–524
Rosenbaum P, Rubin D (1985) Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. Am Stat 39:33–38
Roy AD (1951) Some thoughts on the distribution of earnings. Oxford Econ Pap 3:135–146
Rubin D (1974) Estimating causal effects of treatments in randomized and non-randomized studies. J Educ Psychol 66:688–701
Smith J, Todd P (2005a) Does matching overcome LaLonde's critique of nonexperimental methods? J Econ 125:305–353
Smith J, Todd P (2005b) Rejoinder. J Econ 125:365–375
Smith J, Whalley A (2006) How well do we measure public job training? Unpublished manuscript, University of Michigan
Zhao Z (2004) Using matching to estimate treatment effects: data requirements, matching metrics, and Monte Carlo evidence. Rev Econ Stat 86:91–107
Appendix

Table A1 Variable definitions

OUTCOMES
Sum of quarterly earnings in the first 6 quarters after RA/EL (RA/EL is the date of random assignment for the controls and the date of eligibility screening for the ENPs). Constructed from the average monthly earnings per quarter variable used in Heckman et al. (1997, 1998c)
Indicator of positive earnings in the sixth quarter after RA/EL

BACKGROUND VARIABLES FROM THE LONG BASELINE SURVEY
Age: Indicators for ages 30–39, 40–49 and 50–54
Education: Highest grade of formal schooling that the respondent had completed as of the long baseline interview. Recoded into the following indicator variables for particular ranges of the highest grade completed: highest grade completed <10 at interview time, between 10–11, 12, 13–15, and >15
Marital status: The respondent's marital status during the 12 months prior to RA/EL: currently married at RA/EL, last married 1–12 months prior to RA/EL, last married >12 months prior to RA/EL, and single or never married at RA/EL. Only the first category, currently married at RA/EL, was used in the baseline specification
Family earnings in the year prior to the baseline interview: The sum of the total earnings of all related household members, including the respondent, in the year prior to the baseline interview. It includes only persons in the household at the time of the interview. The total is set to missing if the employment status is missing for any related household member or if the annual earnings are missing for any employed related household member. The indicators for individual categories, which include imputations, are: family income between $0 and $3,000, between $3,001 and $9,000, between $9,001 and $15,000, and greater than $15,000
Quarterly welfare pattern before RA/EL: Pattern of quarterly welfare receipt in the two quarters up to and including RA/EL. The quarterly welfare receipt variables are set to 1 if the respondent received AFDC, food stamps, or general assistance in any of the months in the quarter and to zero if they received none of these in all of the months of the quarter. The coding of the patterns created by the two quarterly variables, and the corresponding indicator variables, are as follows: no welfare → no welfare, no welfare → welfare, welfare → no welfare and welfare → welfare
Two most recent labor force status values before RA/EL: Two most recent values of the monthly labor force status in the 7 months up to and including the month of RA/EL. The values of this variable, and the corresponding indicator variables, are: emp → emp, unm → emp, olf → emp, emp → unm, unm → unm, olf → unm, emp → olf, unm → olf and olf → olf
Quarterly earnings in the six most recent quarters prior to RA/EL: Average earnings per quarter are constructed from monthly measures self-reported by individuals

IMPUTATIONS
Missing values due to item non-response were imputed for the variables listed earlier. Missing values for continuous variables, such as household members, were imputed using the predicted values from a linear regression. Missing values of dichotomous variables, such as the presence of any own children in the household, were replaced with the predicted probabilities estimated in a logit equation. Missing values of indicator variables corresponding to particular values of categorical variables with more than two categories, such as the five indicators for the categories of the highest grade completed variable, were replaced by the predicted probabilities obtained from a multinomial logit model with the categorical variable as the dependent variable. In all cases, the estimating equations used to produce the imputations included: indicators for race/ethnicity, indicators for age categories, indicators for receipt of a high school diploma or a GED, and site indicators. All variables were interacted with a control group indicator. Variables included were chosen because they had no missing values in the sample. Separate imputation models were estimated for adult males and adult females
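The imputation rule for missing indicator variables described above can be sketched as follows; the predictor names are illustrative stand-ins for the race, age-category, diploma/GED and site indicators mentioned in the text, interacted with a control-group dummy.

```python
# Minimal sketch of the Table A1 imputation rule for a dichotomous variable: fit a logit on
# the always-observed covariates (interacted with a control-group indicator) and replace
# missing values with predicted probabilities. Column names are illustrative placeholders.
import statsmodels.formula.api as smf

PREDICTORS = "(black + C(age_cat) + hs_diploma_or_ged + C(site)) * control"

def impute_dummy(df, var):
    observed = df[df[var].notna()]
    fit = smf.logit(f"{var} ~ {PREDICTORS}", data=observed).fit(disp=False)
    filled = df[var].copy()
    missing = df[var].isna()
    filled.loc[missing] = fit.predict(df.loc[missing])   # predicted probability, not 0/1
    return filled
```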
Table A2 Optimal kernel and bandwidth (each cell reports the cross-validated optimal bandwidth / smallest RMSE for the given kernel; the last column gives the bandwidth search grid and the ENP sample size)

Cross-validation analysis for kernel matching
| Outcome, group | Gaussian | Epanechnikov | Tricube | Grid: min bw, max bw, grid size, ENP N |
| Earnings, adult males | 0.0116 / 4738.5560 | 0.0140 / 4724.0550 | 0.0140 / 4742.8010 | 0.004, 0.635, 55, 357 |
| Earnings, adult females | 0.0045 / 3043.9586 | 0.0097 / 3044.8603 | 0.0106 / 3044.5481 | 0.001, 0.271, 64, 870 |
| Employment, adult males | 0.0518 / 0.4287 | 0.1222 / 0.4292 | 0.1478 / 0.4293 | 0.004, 0.618, 55, 367 |
| Employment, adult females | 0.2898 / 0.5006 | 0.0125 / 0.4992 | 0.0137 / 0.4992 | 0.001, 0.290, 65, 896 |

Cross-validation analysis for local linear matching
| Outcome, group | Gaussian | Epanechnikov | Tricube | Grid: min bw, max bw, grid size, ENP N |
| Earnings, adult males | 0.0645 / 4682.6660 | 0.0586 / 4669.4770 | 0.2962 / 4594.7070 | 0.004, 0.635, 55, 357 |
| Earnings, adult females | 0.2714 / 3054.8267 | 0.2714 / 3056.1162 | 0.0591 / 3012.5712 | 0.001, 0.271, 64, 860 |
| Employment, adult males | 0.1344 / 0.4238 | 0.4640 / 0.4238 | 0.5104 / 0.4250 | 0.004, 0.618, 55, 367 |
| Employment, adult females | 0.2898 / 0.5014 | 0.0221 / 0.4986 | 0.1229 / 0.4957 | 0.001, 0.290, 65, 896 |

Cross-validation analysis for difference-in-differences kernel matching
| Outcome, group | Gaussian | Epanechnikov | Tricube | Grid: min bw, max bw, grid size, ENP N |
| Earnings, adult males | 0.0273 / 3834.1230 | 0.0645 / 3840.9380 | 0.0709 / 3841.2520 | 0.004, 0.635, 55, 357 |
| Earnings, adult females | 0.1472 / 2271.1581 | 0.2868 / 2271.0654 | 0.2868 / 2272.0574 | 0.001, 0.287, 64, 823 |
| Employment, adult males | 0.0389 / 0.4870 | 0.0570 / 0.4849 | 0.0570 / 0.4855 | 0.004, 0.618, 55, 367 |
| Employment, adult females | 0.1708 / 0.5218 | 0.0210 / 0.5210 | 0.0338 / 0.5209 | 0.001, 0.275, 64, 858 |

Notes: 1 The endpoints of the grid for the bandwidth search are (Xmax − Xmin)/N and (Xmax − Xmin)/2; each step increments the previous bandwidth by a factor of 1.1. 2 Within each demographic group we use the same comparison group of ENPs for all three treatment streams; as a result, the optimal bandwidth is the same as well. The exception is adult males in the CT-OS treatment stream, for whom we adopt a slightly different propensity score specification due to the small sample size
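The bandwidth grid described in note 1 above can be generated as in the following sketch (the function name and arguments are ours, not from the paper):

```python
import numpy as np

def bandwidth_grid(x, n_obs, growth=1.1):
    """Geometric bandwidth grid from (Xmax - Xmin)/N up to (Xmax - Xmin)/2,
    multiplying by 1.1 at each step, as in note 1 of Table A2."""
    span = float(np.max(x) - np.min(x))
    h, grid = span / n_obs, []
    while h <= span / 2.0:
        grid.append(h)
        h *= growth
    return np.array(grid)
```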
Table A3 Balancing tests for adult males and females (standardized differences)

Adult males: nearest neighbor standardized differences
| Variable | Overall | CT-OS | OJT | Other |
| Site: Fort Wayne | -11.65 | 0.00 | 1.00 | 3.94 |
| Site: Jersey City | -5.91 | 0.00 | -10.74 | -11.24 |
| Site: Providence | 16.87 | 0.00 | 6.64 | 4.73 |
| Race: black | 5.40 | 6.16 | -13.56 | 0.00 |
| Race: other | -5.58 | -19.97 | -6.37 | -8.45 |
| Age | 6.57 | 17.11 | -0.83 | -7.63 |
| Age squared | 6.92 | 14.48 | -2.38 | -5.63 |
| Educ. <10 years | -3.30 | 2.42 | -12.50 | 15.81 |
| Educ. 10-11 years | 1.39 | -9.67 | 5.72 | -7.31 |
| Married at RA/EL | -2.65 | -9.05 | -5.15 | -8.12 |
| Fam. inc. 3K-9K | 6.48 | . | -17.87 | 15.08 |
| Fam. inc. 9K-15K | 8.97 | 5.73 | -1.90 | 16.88 |
| Fam. inc. >15K | -8.08 | -17.97 | -6.22 | -1.36 |
| LF into unempl. | -2.86 | 26.61 | -6.57 | 7.87 |
| LF into OLF | 0.64 | -14.26 | 3.26 | 6.34 |
| Earnings Q-1 | -4.63 | 4.63 | -6.67 | -6.63 |
| Earnings Q-2 | 4.04 | 1.77 | -8.80 | 0.88 |
| Earnings Q-3 to Q-6 | 0.05 | -17.55 | -3.79 | -14.66 |

Adult males: standardized differences summary for the cross-sectional estimators (estimators 4 to 7 from Table 7); each cell reports maximum absolute standardized difference / instances with absolute standardized difference > 20 / average absolute standardized difference
| Estimator | Overall | CT-OS | OJT | Other |
| Nearest neighbor | 16.87 / 0 / 5.67 | 26.61 / 1 / 9.85 | 17.87 / 0 / 6.66 | 16.88 / 0 / 7.92 |
| Optimal nearest neighbors (12) | 17.62 / 0 / 5.14 | 38.79 / 6 / 13.58 | 14.59 / 0 / 6.14 | 29.84 / 1 / 12.28 |
| Optimal kernel (Epanechnikov 0.014) | 21.17 / 1 / 7.52 | 37.46 / 11 / 20.19 | 23.02 / 1 / 8.34 | 31.39 / 1 / 10.91 |
| Optimal local linear (Gaussian 0.0045) | 10.16 / 0 / 3.99 | 21.83 / 2 / 9.60 | 14.25 / 0 / 6.15 | 23.61 / 1 / 8.95 |

Adult females: nearest neighbor standardized differences
| Variable | Overall | CT-OS | OJT | Other |
| Site: Fort Wayne | -0.65 | 5.04 | -14.50 | -3.58 |
| Site: Jersey City | -4.71 | -7.86 | 5.44 | 8.89 |
| Site: Providence | -7.02 | 1.21 | -1.19 | -5.74 |
| Race: black | -11.70 | -3.38 | 1.88 | -16.95 |
| Race: other | 7.18 | -0.98 | 0.85 | -1.23 |
| Age | 0.57 | 1.86 | -2.97 | -0.94 |
| Age squared | 0.52 | 2.26 | -2.87 | 0.73 |
| HS dropout | 8.90 | 4.83 | 3.09 | -7.36 |
| Educ. >13 years | -10.62 | 9.33 | 4.90 | 14.79 |
| Married at RA/EL | 7.23 | -4.70 | -5.08 | -1.07 |
| No welf. → welf. | 10.78 | -18.25 | 0.00 | -14.04 |
| Welf. → no welf. | -11.41 | 9.97 | -9.40 | -21.35 |
| Welf. → welf. | -1.59 | 12.00 | 12.62 | 3.51 |
| Welfare NA | 1.16 | -1.88 | -1.42 | 6.25 |
| Fam. inc. 3K-9K | 5.49 | 0.58 | 17.95 | -8.79 |
| Fam. inc. 9K-15K | -6.85 | -12.90 | 2.97 | -10.66 |
| Fam. inc. >15K | 7.19 | -4.05 | -0.32 | 1.79 |
| LF emp → emp | 2.66 | 5.39 | 10.46 | 1.62 |
| LF emp → olf | 1.51 | -0.90 | -8.58 | 8.86 |
| LF into unempl. | 1.56 | 17.67 | 7.12 | 11.07 |
| LF olf → emp | -0.53 | -8.76 | -12.37 | -4.87 |
| LF olf → unm | 1.15 | -0.42 | -9.02 | 6.58 |
| LF olf → olf | 1.91 | -10.13 | -2.62 | -6.59 |

Adult females: standardized differences summary for the cross-sectional estimators (estimators 4 to 7 from Table 7); cells as above
| Estimator | Overall | CT-OS | OJT | Other |
| Nearest neighbor | 11.70 / 0 / 4.91 | 18.25 / 0 / 6.28 | 17.95 / 0 / 5.98 | 21.35 / 1 / 7.27 |
| Optimal nearest neighbors (18) | 11.16 / 0 / 4.31 | 6.90 / 0 / 2.95 | 21.98 / 2 / 5.98 | 15.90 / 0 / 5.27 |
| Optimal kernel (Tricube 0.2962) | 11.30 / 0 / 3.79 | 15.21 / 0 / 6.71 | 10.52 / 0 / 4.80 | 19.39 / 0 / 7.11 |
| Optimal local linear (Tricube 0.0591) | 13.88 / 0 / 3.78 | 8.08 / 0 / 2.66 | 17.22 / 0 / 4.71 | 16.13 / 0 / 5.24 |
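Table A3 reports standardized differences between the matched groups. One common way to compute such a statistic is sketched below; the exact convention used in the table (for example, which variances enter the denominator) is not spelled out in the text, so this should be read as an illustrative assumption in the spirit of Rosenbaum and Rubin (1985).

```python
import numpy as np

def standardized_difference(x_treat, x_comp, w_comp=None):
    """Standardized difference in percent between a treated sample and a (possibly
    weighted) matched comparison sample: 100 * (mean_T - mean_C) / sqrt((var_T + var_C)/2).
    This is one common convention, assumed here for illustration."""
    x_treat, x_comp = np.asarray(x_treat, float), np.asarray(x_comp, float)
    w_comp = np.ones_like(x_comp) if w_comp is None else np.asarray(w_comp, float)
    mean_gap = x_treat.mean() - np.average(x_comp, weights=w_comp)
    pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_comp.var(ddof=1)) / 2.0)
    return 100.0 * mean_gap / pooled_sd
```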
Employment effects of the provision of specific professional skills and techniques in Germany
Bernd Fitzenberger · Stefan Speckesser
Revised: 15 March 2006 / Published online: 4 September 2006 © Springer-Verlag 2006
Abstract Based on unique administrative data, which has only recently become available, this paper estimates the employment effects of the most important type of public sector sponsored training in Germany, namely the provision of specific professional skills and techniques (SPST). Using the inflows into unemployment for the year 1993, the empirical analysis uses local linear matching based on the estimated propensity score to estimate the average treatment effect on the treated of SPST programs by elapsed duration of unemployment. The empirical results show a negative lock-in effect for the period right after the beginning of the program and significantly positive treatment effects on employment rates of about 10 percentage points and above a year after the beginning of the program. The general pattern of the estimated treatment effects is quite similar for the three time intervals of elapsed unemployment considered. The positive effects tend to persist almost completely until the end of our evaluation period. The positive effects are stronger in West Germany compared to East Germany.
Electronic supplementary material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s00181-006-0088-z and is accessible for authorized users.
B. Fitzenberger (corresponding author), Department of Economics, Goethe-University, PO Box 11 19 32 (PF 247), 60054 Frankfurt am Main, Germany; e-mail: [email protected]
S. Speckesser, PSI London and Goethe University Frankfurt, Frankfurt, Germany; e-mail: [email protected]
Keywords Training program · Employment effects · Administrative data · Matching
JEL Classification C14 · C23 · H43 · J64 · J68
1 Introduction

Over the last decade, a number of studies have been conducted regarding the effectiveness of further training as part of active labor market policy in Germany.1 Practically all of these studies make use of survey data.2 Although such data are rich with respect to informative covariates, most previous evaluation studies suffer from severe shortcomings with respect to the quality of the treatment information and to the precision of the employment history before and after treatment. Often, very heterogeneous treatments are summarized in a binary treatment indicator. Previous studies for Germany also do not distinguish treatments by the previous unemployment experience, an issue which is emphasized in the recent literature on the timing of events (see Abbring and van den Berg 2003, 2004; Fredriksson and Johansson 2003, 2004; Sianesi 2004). Finally, most evaluation studies only assess the effects of further training in East Germany.3

This evaluation study takes advantage of unique administrative data which involve register data on employment as well as data on unemployment and participation in active labor market programs generated by the Federal Employment Office (Bundesagentur für Arbeit). Our data set merges register data with benefit data and with survey data obtained from the local offices of the Federal Employment Office for participants in further training programs for the period 1980–1997, offering rich information on quite heterogeneous courses. Further training (off-the-job) consists of (a) the provision of specific professional skills, (b) complete retraining of the employed to a new formal degree for a different profession, (c) short-term courses which increase the search effectiveness of the individuals, and (d) German language courses for immigrants, using a classification developed in this paper.

While the previous literature evaluates the employment effects of quite heterogeneous training programs, this paper focuses on a specific type of training which is defined by its economic purpose. Based on our classification of training types, we evaluate the employment effects of the most important type, the provision of specific professional skills and techniques (SPST). Traditionally, this type of further training was the dominant type of training for the unemployed in Germany (BLK 2000) and, somewhat in contrast to a lively public debate about active labor market policy in Germany, the training provided under the SPST type has changed only little since the 1990s. Therefore, the results reported in this study offer evidence for the long-term effectiveness of policies that are still delivered through the Federal Employment Agency in Germany today and should ideally feed into the debate on the effectiveness of such policies following the recent reforms in Germany.

Since the analysis is based on administrative data, this study has to use a non-experimental evaluation approach. We build on the conditional independence assumption, purporting that, for the treated and the non-treated, the employment outcome in case of non-treatment is the same on average conditional on a set of covariates which cover socio-economic characteristics, the previous employment history of the individuals, the beginning of unemployment, and the elapsed duration of unemployment. In a dynamic setting, one has to take account of the timing of events, see Abbring and van den Berg (2003, 2004), Fredriksson and Johansson (2003, 2004), and Sianesi (2004). Static treatment evaluations run the risk of conditioning on future outcomes, leading to possibly biased treatment effects. This is because the non-treated individuals in the data might be observed as non-treated because their treatment starts after the end of the observation period or because they exit unemployment before treatment starts (Fredriksson and Johansson 2003, 2004). We follow Sianesi (2004) and estimate the effects of treatment starting after some unemployment experience against the alternative of not starting treatment at this point in time and waiting longer. Our analysis uses the popular propensity score matching approach adjusted to a dynamic setting. Our matching estimator is implemented using local linear matching (Heckman et al. 1998b) with the cross-validation procedure suggested in Bergemann et al. (2004).

The remainder of the paper is structured as follows: Sect. 2 gives a short description of the institutional regulation and participation figures for active labor market policy. Section 3 focuses on the different options of further training, their target groups, and course contents. Section 4 describes the methodological approach to estimate the treatment effects. The empirical results are discussed in Sect. 5. Section 6 concludes. The final appendix provides further information on the data and detailed empirical results. An additional appendix, which is available upon request, provides detailed information on the construction of the data set and further empirical results.

1 See Speckesser (2004, Chap. 1) and Wunsch (2006, Sect. 6.5) for recent surveys with further references.
2 Notable exceptions are Bender and Klose (2000), who use an earlier version of a subset of our data set, and the recent studies of Lechner et al. (2005a, b), which are based on the same data set as our study. In fact, this data set is the outcome of a joint effort to merge administrative data for evaluation purposes, see Bender et al. (2005). The studies of Lechner et al. (2005a, b) and our study differ a lot regarding the exact treatment definition, the choice of valid observations, and the econometric methods used. It is therefore difficult to compare our results with the effects found in Lechner et al. (2005a, b), where significant positive results were found for different types of training. However, assuming that short-term training programs as defined in Lechner et al. correspond at least partially to the type of training analyzed here, both studies suggest positive outcomes of further training in the long run.
3 See among others Hujer and Wellner (1999), Lechner (1999), Fitzenberger and Prey (2000), and our own survey Fitzenberger and Speckesser (2002).
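To make the dynamic set-up concrete, the following sketch shows one way to build, for a stratum of elapsed unemployment duration, the sample of treated individuals (SPST start within the stratum) and the "waiting" comparisons (still unemployed and not yet treated at the start of the stratum). The column names are illustrative placeholders, not the variables of the actual data set.

```python
import pandas as pd

def build_stratum(spells: pd.DataFrame, u_start: int, u_end: int) -> pd.DataFrame:
    """spells: one row per inflow into unemployment with (hypothetical) columns
       'unemp_duration'  - observed months of the unemployment spell
       'months_to_spst'  - month (since unemployment entry) in which SPST starts; NaN if never observed.
    Returns the stratum sample with a 0/1 'treated' flag (SPST start within [u_start, u_end])."""
    at_risk = spells[spells["unemp_duration"] >= u_start].copy()
    at_risk = at_risk[~(at_risk["months_to_spst"] < u_start)]   # drop anyone treated earlier
    at_risk["treated"] = at_risk["months_to_spst"].between(u_start, u_end).astype(int)
    return at_risk
```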
2 Basic regulation of further training

2.1 Programs

For the period covered by our data, further training in Germany was regulated on the basis of the Labor Promotion Act (Arbeitsförderungsgesetz, AFG) and was implemented through the German Federal Employment Service (formerly Bundesanstalt für Arbeit, BA). It aimed at improving occupational flexibility, career advancement and the prevention of skill shortages. However, following the persistent unemployment after the 1970s, the programs of further training changed their character from a preventive ALMP towards an intervention policy offered to the unemployed and to those at severe risk of becoming unemployed. The increasing number of unemployed entering these programs shifted their aims from skill-upgrading programs focused on the employed to short-term programs in which individuals were taught new technologies and a partial enhancement of existing skills for occupational re-integration. Although many changes concerning benefit levels and eligibility were implemented, the traditional policies of further training, retraining, and the integration subsidy remained unchanged until 1997.4

• Further training included the assessment, maintenance and extension of skills, including technical development and career advancement (Weiterbildung). The duration of the courses depended on individual predispositions, other co-financing institutions and the adequate courses provided by the training suppliers.
• Retraining enabled vocational re-orientation if a completed vocational training did not lead to adequate employment (Umschulung). Retraining was supported for a period of up to 2 years and aimed at providing a new certified occupational skill.
• As a third program of further training, integration subsidies (Einarbeitungszuschuss) offered financial aid to employers providing employment to workers who had been unemployed or were directly threatened by unemployment. A subsidy was paid for an adjustment period until the supported persons reached full proficiency in their job (up to 50% of the standard wage in the respective occupation).
• In 1979, short-term training (Maßnahmen zur Verbesserung der Vermittlungsaussichten) was introduced under §41a AFG, aiming to "increase prospects of integration". This program was meant to offer assessment, orientation and guidance to the unemployed. The curricula under this program were usually short-term, lasting from two weeks up to two months, and were intended to increase the placement rate of the unemployed.

Except for the integration subsidy, which offered participants a standard salary (according to union wage contracts), participants were granted income maintenance (Unterhaltsgeld) if the conditions of entitlement were satisfied. To qualify, persons needed to meet the requirement of being previously employed for a minimum duration, i.e. at least 1 year in contributory employment, or of receipt of unemployment benefit or subsequent unemployment assistance. The income maintenance amounted to 67% of previous wages for participants with dependent children; otherwise it was equivalent to the unemployment benefit of 60%. However, benefits used to be much higher in the 1980s and early 1990s, with up to 80% of previous net earnings granted. If a person did not fulfill the requirement of previous employment but had received unemployment assistance until the start of the measure, income maintenance might have been paid as well. While participating in further training, participants regained entitlements to unemployment benefits, providing an additional incentive to participate in programs. The BA bore all the costs of further training incurred directly through the training scheme, especially including course fees.

4 Further training consisted of a variety of different types of training provided under the three different programs, such as:
• Preparation, social skills and short-term training
• Provision of specific professional skills and techniques
• Qualification via the educational system/retraining
• Training for specific job offers
• Direct integration in the first labor market
• Career advancement subsidy
• Language training
Further details are discussed in Sect. 3 for the provision of specific professional skills and techniques, the type of training evaluated here. A complete overview of all different types can be found in the appendix.
2.2 Participation

Among the three FuU programs, the general further training scheme (Berufliche Weiterbildung) was the most important in both East and West Germany. In 1980, 70% of the annual total of 232,500 new program participants started a further training scheme, whereas only 14% (32,600) began a program under the integration subsidy (Eingliederungszuschüsse) scheme. New entrants into retraining summed to 37,900 (Berufliche Umschulung, about 16% of the total). On average, the participant stock was about 89,300 in 1980. In 1985, participant entries were 60% higher in total; by then, further training programs accounted for 80% of all participant entries. Between 1980 and 1990, participation increased to 514,600 entries, 74% of which were entries into further training programs. Participation in retraining increased from 37,900 in 1980 to 63,300 in 1990. When labor market policy was extended to East Germany, participation peaked at 887,600 entries in East Germany in 1992 and 574,700 in West Germany, and then declined to 378,400 in West Germany and 269,200 in East Germany in 1996. The share of further training increased over time to 77% in West and 76% in East Germany. The share of participants in retraining was around 20% in West and 18% in East Germany (see Table 1).
Table 1 Participation in further training until 1997 (in 1,000 persons)

| Year | Total annual entries | Further training | Retraining | Integration subsidy | Annual average stock |
| 1980 | 232.5 | 162.4 | 37.9 | 32.6 | 89.3 |
| 1985 | 371.0 | 298.2 | 45.1 | 27.7 | 114.9 |
| 1990 | 514.6 | 383.4 | 63.3 | 67.9 | 167.6 |
| 1991 West | 540.6 | 421.2 | 70.5 | 48.9 | 189.0 |
| 1991 East | 705.3 | 442.8 | 129.9 | 132.6 | 76.7 |
| 1992 West | 574.7 | 464.5 | 81.5 | 28.7 | 180.6 |
| 1992 East | 887.6 | 591.0 | 183.1 | 113.5 | 292.6 |
| 1993 West | 348.1 | 266.0 | 72.2 | 9.9 | 176.8 |
| 1993 East | 294.2 | 181.6 | 81.5 | 31.1 | 309.1 |
| 1994 West | 306.8 | 224.9 | 73.1 | 8.8 | 177.9 |
| 1994 East | 286.9 | 199.1 | 68.6 | 19.2 | 217.4 |
| 1995 West | 401.6 | 309.7 | 81.8 | 10.0 | 193.3 |
| 1995 East | 257.5 | 184.3 | 52.8 | 26.4 | 216.1 |
| 1996 West | 378.4 | 291.6 | 77.3 | 9.5 | 203.6 |
| 1996 East | 269.2 | 204.1 | 48.1 | 17.0 | 205.0 |

Source: Amtliche Nachrichten der Bundesanstalt für Arbeit, several volumes
3 Evaluation based on administrative data 3.1 Integrated administrative data This evaluation study is based on integrated data from various administrative sources: social insurance data for employment, data involving transfer payments during unemployment and survey data for all training participants. •
The core data for this evaluation are drawn from the Employment Subsample (Beschäftigtenstichprobe BST). The BST is a 1% random sample drawn from the mandatory employment register data for all employees who are covered by the social security system over the period 1975–1997. Social insurance contributions are compulsory for dependent employees earning above a minimum wage free of social insurance contributions. However, among the dependent employees specific groups working on a marginal part-time basis and civil servants are excluded. Although these groups are not sampled, the data cover more than 80% of the German labor force. • The second important source apart from the employment information is the benefit payment register (Leistungsempfängerdatei [LED]) of the Federal Employment Service. These data consist of spells for individuals who receive certain benefit payments. Besides unemployment benefit or assistance, these
Professional skills and techniques in Germany
337
data also record very detailed information about income maintenance payments related to the participation in further training schemes. Since the basic sampling results from the employment register, only individuals who experience at least one spell of dependent employment between 1975 and 1997 are sampled. The sampling implies to restrict the analysis to entrants into programs from unemployment who were previously employed because the control group does not allow constructing a non-treatment outcome for treated individuals who did not experience registered unemployment before. The merged employment and benefit data samples roughly 1% of the overall dependent employment and benefit receipt, resulting in 591,627 individuals and in 8,293,879 spells over the period 1975–1997. These data correspond to the IAB Employment Subsample (IABS) used in many empirical studies in Germany. 5 • The third source are participation data collected for all participants in further training (FuU-data). These data provide information about the type of courses, the intended integration objectives and rough information about the contents of the courses with respect to the skills provided. They provide an overview about the persons in FuU-programs, the type of program, the aim of the courses, the type of training (whether the training takes place in classrooms or “on the job”), the provider of the program and the beginning and ending of the treatment and again personal characteristics of the participants (information about sex, age, nationality, the region in which the program takes place, the educational attainment, the employment status before treatment and other important characteristics). The data also indicate the type of income maintenance paid during the participation in a program. The FuU-data cover 54,767 individuals corresponding to 72,983 spells of treatment over the period 1980–1997 (for East Germany 1991–1997 only). In principle, individuals receiving training related benefits that are sampled in the IABS should be part of the FuU-data.6 The three files were merged resulting in an integrated evaluation database consisting of comparable, longitudinal information for treatment and control group as well as information about the type training. Numerous corrections are implemented in order to improve the quality of the data: inconsistencies in both files, which occurred with respect to the reported level of education and occupational status, the year of birth and the family status, were removed. The correction of the variable providing information on the level of schooling and professional education is especially important for this study, because we assume the individual skills to be the decisive reason for an assignment into treatment. As the information on the individual’s vocational training is provided by the 5 However, the scientific use-file of the IABS does not report the receipt of benefit or benefits paid for training if employment is observed simultaneously (Bender et al. 2000). Consequently, we re-merged the scientific use-file to the original benefit data a second time in order to avoid resulting underreporting of training participation. 6 However, there are exceptions to this rule: since we find participants without any payment of income maintenance, using the merged data is the only option to fully identify the treatment group.
employers, we assume that this rather reflects the level of education necessary to fulfill the tasks in the individual's current job. The individual's formal skill level may very well lie above the education level reported by the employer. A detailed description of the correction can be found in Bender et al. (2005, Chap. 3).

3.2 Identification of different types of training

Since the basic regulation of further training provides only a broad framework, quite dissimilar treatments can be implemented under the same regulation (e.g. training for career advancement or short-term courses for the very long-term unemployed are both reported as "further vocational training"). Earlier descriptive studies7 on the types of treatment did not distinguish treatments providing basic social skills from treatments offering certified professional skills, although these different options are supposed to influence job search in very different ways. The merged data of this study allow the identification of specific types of further training, while earlier papers usually evaluated bundles of very heterogeneous types of treatment. Our data permit us to distinguish treatments provided outside a firm-specific labor market from those within a firm, and to determine whether the course provided general or occupation-specific training and whether it was full-time or part-time. In order to identify participants in similar types of training, we exploit all available information, including the occupational status while in training, the type of benefit and a variable recording the type of training in the FuU-data.8 The combination of these different sources allows for an identification of informative (and coherent) types of treatment, applying a typology that relies on the type of training from the FuU-data (see Bender et al. 2005, Chap. 2.3 for a description of the FuU-data) and the closeness to the demands of the labor market as indicated by the IABS data on employment status. Especially important are the employment status and the program information: while the program information "further vocational training" might comprise both employed and unemployed participants, the employment status additionally allows us to identify the
7 One of these studies based on the reported FuU-data, by Blasche and Nagel (1995), distinguished
whether the training was carried out as an adjustment or a retraining and whether it was a full-time or part-time treatment.
8 The training data should actually be sufficient to identify the extent of further training since they should have been collected for all training spells started under the AFG. However, there are two reasons why we cannot rely only on the type-of-training variable from the FuU-data: First, the training data are incomplete because data collection was not related to benefit payments. In such cases, administrative data are usually incomplete and the benefit information is required to identify the full extent of participation in the program. Secondly, and equally important, the use of employment data and benefit data increases the precision of the information on the type of training: it allows us to find out whether a person was employed while participating or whether a specific benefit was paid, both offering additional valuable information about the participant's type of treatment.
target group ("reintegration" for specific groups of the unemployed or "career advancement" for employees) or to indicate how closely the program is related to an internal labor market. To summarize: based on the integrated evaluation database, we are able to identify coherent types of training, ranging from the provision of social skills and basic general training, through the provision of specific skills, integration into firm-specific labor markets, retraining and the promotion of certified occupations, up to career advancement training, which used to be supplied to persons without the risk of unemployment. The full range of training provided under the further training regulation can be found in the appendix. In this paper, we focus on the most important type of training, the provision of SPST.
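To make the combination of sources concrete, the following minimal sketch (in Python/pandas, not the coding plan actually used in the evaluation database) illustrates how a coherent treatment type might be assigned from merged spell information. The column names (fuu_type, benefit_type, employed_during_course) and the simple classification rules are hypothetical stand-ins for the detailed typology described in Bender et al. (2005).

```python
import pandas as pd

# Hypothetical merged spell data: one row per training spell, combining the
# reported course type (FuU-data), the type of income maintenance (benefit data)
# and the employment status during the course (employment register).
spells = pd.DataFrame({
    "fuu_type": ["further training", "retraining", "further training", None],
    "benefit_type": ["income maintenance", "income maintenance", None, "income maintenance"],
    "employed_during_course": [False, False, True, False],
})

def classify_treatment(row):
    """Toy rules combining the three sources; illustrative only."""
    if row["fuu_type"] == "retraining":
        return "RT"                   # qualification via the educational system/retraining
    if row["employed_during_course"]:
        return "internal/career"      # course close to a firm's internal labor market
    if row["fuu_type"] == "further training" or row["benefit_type"] is not None:
        return "SPST"                 # specific professional skills and techniques
    return "unclassified"

spells["treatment_type"] = spells.apply(classify_treatment, axis=1)
print(spells[["fuu_type", "benefit_type", "employed_during_course", "treatment_type"]])
```

The point of the sketch is merely that no single source identifies the treatment type on its own; in the actual data the benefit variable, the FuU course type and the employment status enter a much finer coding plan.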
3.3 Specific professional skills and techniques (SPST)

This type of further training is intended to improve the starting position for finding a new job by providing additional skills and specific professional knowledge in courses usually lasting between four weeks and one year. It involves refreshing specific skills, e.g. computer skills, or training on new operational methods. SPST is targeted at unemployed persons or persons at risk of becoming unemployed in order to facilitate integration into full employment. It mainly consists of classroom training; the acquisition of professional knowledge through work experience is provided in most programs. Participants usually obtain a certificate about the contents of the course, signaling refreshed or newly acquired skills and the amount of theory and work experience achieved. Such a certificate provides an additional signal for potential employers and is supposed to increase the matching probability, since the provision of up-to-date skills and techniques is considered to be a strong signal in the search process. This type of training was the most important type of training for the unemployment cohort used here (see the descriptive statistics in Sect. 3.4 based on our sample of unemployment inflows in 1993) and, as survey data for training in 2000 reveal, still is the most important, with 36% of all cases and 35% of the volume (hours × cases, Table 2). Together with the similar type "other course", which usually provides limited occupational knowledge as well, 67% of all cases in West Germany and 68% of the total volume provided specific professional skills and techniques. These data also show the relatively smaller role of the provision of specific skills and techniques in East Germany, where long-term retraining programs are still the most important form of training, with 29% of the total volume of training and 20% of all courses. However, "other courses" (20% of the total volume) and specific professional skills (29%) are very important there, too. In light of the recent data on course contents, we believe that our evaluation of the SPST program is of particular interest for policy makers because this program is still the most important type of training today. Our evaluation using data for the 1990s should therefore be regarded as a highly policy-relevant
Table 2 Type of further vocational training in East and West Germany, 2000 (% share)

Type of course or content        Participants            Volume of hours
of further training            West        East         West        East
Retraining                       3           20            9          29
Promotion                       10           22            6           7
Integration                     18           15           17          15
Specific skills                 36           21           35          29
Other course                    31           21           33          20
No information                   1            0            0           0
Sum                             99           99          100         100

Source: BLK (2000), p. 272
Note: The classification differs slightly from the classification of further training applied in our analysis, as the information is based on a survey of training providers rather than social insurance data
contribution, providing long-term evidence on treatment effects in programs that are most similar to contemporary policies in place. Besides, we also expect SPST to be the most important type of training in the future planning of further vocational training; see, for example, the recent report by the Federal Commission for Education Planning and Research, which stresses the importance of additional qualifications/complementary specific skills (BLK 2000, 3).

3.4 Inflow sample into unemployment and participation by type of training

We focus on the effect of training programs on the employment chances of unemployed individuals. Therefore, we base our subsequent empirical analysis on an inflow sample into unemployment. We use the inflows into unemployment in the year 1993 both for East and West Germany and we estimate the effect of SPST on future employment rates. To be precise, we use individuals who experience a transition from employment to nonemployment and for whom a spell of benefit transfer payments from the BA starts in the year 1993, before these unemployed individuals possibly find a new job. In the following, we denote the start of the benefit spell as the beginning of the unemployment spell. We condition on benefit receipt to omit most individuals who move out of the labor force after losing their jobs. We choose the year 1993 because this is the second year observable for East Germany, such that we can control for one year of labor market experience before the beginning of unemployment. Our data allow us to follow individuals until December 1997. Participation in the provision of specific professional skills and techniques and in other types of training can be identified from either the LED-data or the FuU-data. In the best case, both sources provide coherent information about the treatment and one can easily identify the type of treatment from both data sources. However, due to quality deficiencies in the participation data, many participants might not be recorded in the FuU-data. In this case, the LED-data helps
to identify the treatment on the basis of the benefit variable, which itself offers very specific information about the treatment. In other cases, we observe individual records showing employment in the IABS information and at the same time training in the FuU-data. This is, for example, the case if the treatment takes place in a firm and individuals are paid a normal salary (e.g. integration subsidy) or if individuals are prepared for concrete job offers. Since we have two separate sources of data, we make use of all available information and combine benefit information with participation data in order to identify all the different types of training.9 Table 3 provides information about the size of the inflow samples and the distribution of training. We only consider the three types of training programs that are most suitable for unemployed individuals and that do not involve on-the-job training (training while working in a job). These are (i) provision of specific professional skills (SPST), (ii) preparation, social skills and short-term training (PST), and (iii) qualification via the educational system and retraining (RT). The total inflow sample comprises 18,775 spells for West Germany and 9,920 spells for East Germany. There are 1,500 training spells for West Germany and 1,656 for East Germany. Among these, SPST represents by far the largest type of training, with 895 SPST spells in West Germany and 1,086 SPST spells in East Germany. Almost one fourth of all training spells involve RT, and PST represents the smallest group both in West and East Germany. This paper focuses on SPST as the largest training program among the unemployed both in East and West Germany. In 1993, about 5% of all unemployed in West Germany and more than 10% in East Germany participated in such a training program.
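To illustrate the sampling rule, the sketch below flags transitions from employment into benefit receipt beginning in 1993. The data frames emp_spells and ben_spells and their column names are hypothetical stand-ins for the merged IABS/LED spell files, and the 31-day gap is an arbitrary simplification of the transition rule described in the text.

```python
import pandas as pd

# Hypothetical employment and benefit spells (one row per spell).
emp_spells = pd.DataFrame({
    "person_id": [1, 2, 2],
    "start": pd.to_datetime(["1990-03-01", "1988-07-01", "1994-02-01"]),
    "end":   pd.to_datetime(["1993-04-30", "1993-01-31", "1995-06-30"]),
})
ben_spells = pd.DataFrame({
    "person_id": [1, 2],
    "start": pd.to_datetime(["1993-05-01", "1993-02-01"]),
    "end":   pd.to_datetime(["1994-01-31", "1994-01-31"]),
})

# Keep benefit spells that begin in 1993 and directly follow an employment spell:
# this mimics the inflow sample of transitions from employment into benefit receipt.
cand = ben_spells[ben_spells["start"].dt.year == 1993].merge(
    emp_spells, on="person_id", suffixes=("_ben", "_emp")
)
gap = (cand["start_ben"] - cand["end_emp"]).dt.days
inflow = cand[(gap > 0) & (gap <= 31)]          # employment ended just before benefits start
inflow = inflow[["person_id", "start_ben"]].rename(columns={"start_ben": "ue_start"})
print(inflow)
```

In the actual analysis, multiple unemployment spells per person are kept as separate observations and the start of the benefit spell defines the beginning of the unemployment spell.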
4 Evaluation approach

We analyze the employment effects of the provision of SPST. Specifically, we estimate the average treatment effect on the treated (TT), i.e. the differential impact the treatment shows for those individuals who participate in an SPST course. We take the 1993 inflow sample into unemployment. Extending the static binary treatment framework to a dynamic setting, we distinguish three types of treatment depending upon the month in which the SPST course starts relative to the elapsed unemployment duration. We estimate the TT for participation in SPST against the comprehensive alternative of nonparticipation in SPST, which includes participation in another program of active labor market policy. To assess the sensitivity of the results with respect to participation in other training programs, we also estimate the TT for participation in SPST against the alternative of no participation in any of the training programs considered in this paper. Our dynamic evaluation approach following Sianesi (2004) applies
9 The additional appendix describes in detail which variables were required for this. It provides
also the precise coding plan. Table 3 in Sect. 3 shows that many treatments would not have been detected, or would have been coded differently, had we not been able to use the combined information from both benefit and participation data.
Table 3 Participation in first training program for 1993 inflow sample into unemployment—Program starts before a new job is found

Training program^a                                      Frequency   Percent of       Percent
                                                                    inflow sample    among treated
West Germany
  Provision of specific professional skills                 895         4.8             59.7
  Preparation, social skills and short-term training        250         1.3             16.7
  Integration via education system/retraining               355         1.9             23.7
  No training program above                              17,275        92.0              –
  Total inflow sample                                    18,775       100               100
East Germany
  Provision of specific professional skills               1,086        10.9             65.6
  Preparation, social skills and short-term training        172         1.7             10.4
  Integration via education system/retraining               398         4.0             24.0
  No training program above                               8,264        83.4              –
  Total inflow sample                                      9,920       100               100

a We exclude training programs which involve on-the-job training (training for specific jobs and direct integration/wage subsidy) or which involve a very small number of participants since they are not targeted at inflows into unemployment (career advancement and language training)
the standard static binary treatment approach recursively depending on the elapsed unemployment duration. In the following, we first discuss our extension of the standard binary treatment approach to a dynamic setting. Then, we describe the implementation of the matching estimator for our problem.

4.1 Extending the static binary treatment approach to a dynamic setting

Our empirical analysis is based upon the potential-outcome approach to causality (Roy 1951; Rubin 1974); see the survey by Heckman et al. (1999). We estimate the TT in the binary treatment case.10 The individual treatment effect is the difference between the treatment outcome $Y^1$ and the nontreatment outcome $Y^0$, where the latter is not observed for the treated individuals. In a static context, the TT is given by

$$\theta_{TT} = E(Y^1 \mid D = 1) - E(Y^0 \mid D = 1), \qquad (1)$$

where $D$ denotes the treatment dummy.
10 The framework can be extended to allow for multiple, exclusive treatments. Lechner (2001) and
Imbens (2000) show how to extend standard propensity score matching estimators for this purpose. These results also justify our bivariate estimation of the TT for participation in SPST against the alternative of no participation in any training program.
We use the static binary treatment framework in a dynamic context. Our basic sample consists of individuals who start an unemployment spell with transfer payments in 1993 and who had been employed before. These individuals can participate in an SPST program at different points in time in their unemployment spell. Both the type of treatment and the selectivity of the treated individuals may depend upon the exact starting date of the program. Abbring and van den Berg (2003) and Fredriksson and Johansson (2003, 2004) interpret the start of the program as an additional random variable in the "timing of events". Unemployed individuals are not observed to participate in a program either because their participation takes place after the end of the observation period or because they leave the state of unemployment either by finding a job or by moving out of the labor force. Abbring and van den Berg (2003) consider a joint duration model for the length of unemployment and the time until treatment starts. The latter time is right censored when an individual exits from unemployment before treatment starts. For a mixed proportional hazard model, Abbring and van den Berg show that one can nonparametrically identify the causal effect of treatment on the hazard rate from unemployment based on single spell data. Identification is based on the randomness of the time until treatment. The estimation approach implies that the timing of events involves useful information for the estimation of the treatment effect. The approach requires the specification of a joint continuous time model for the duration of unemployment and the time until treatment. We do not pursue this approach for three reasons: First, the discrete nature of our data makes a specification of a continuous time model difficult. Second, identification of the model critically relies on a very strict no-anticipation condition in continuous time, implying that individuals do not anticipate treatment even shortly before it starts. Third, we do not think that our data allow us to model explicitly the duration of unemployment in a mixed proportional hazard model. Fredriksson and Johansson (2003, 2004) argue that it is incorrect to undertake a static evaluation analysis by assigning unemployed individuals to a treatment group and a nontreatment group based on the treatment information observed in the data. Consider the case of analyzing treatment irrespective of the actual starting date during the unemployment spell. If one assigns individuals to the control group who find a job later during the observation period, one effectively conditions on future outcomes when defining the treatment indicator. This might lead to a downward bias in the estimated treatment effect, which is the bias emphasized by Fredriksson and Johansson (2003, 2004). An upward bias can arise as well when future participants, whose participation starts after the end of the observation period, are assigned to the control group. Using duration analysis in discrete time, Fredriksson and Johansson (2004) suggest a matching estimator for the treatment effect based on a time-varying treatment indicator. Treatment can only start at discrete points of time. In a similar vein, Sianesi (2004) argues for Sweden that all unemployed individuals are potential future participants in active labor market programs, a view which is particu-
larly plausible for countries with comprehensive systems of active labor market policies like Sweden or Germany. The above discussion implies that a purely static evaluation of SPST programs is not warranted.11 Following Sianesi (2004), we extend the static framework presented above in the following way. We analyze the employment effects of the first SPST program participation during the unemployment spell considered.12 We do not follow Fredriksson and Johansson (2004) in estimating hazard rates to employment and survival in unemployment because we are interested in the total employment effects irrespective of multiple transitions between employment and nonemployment. We distinguish between treatment starting during months 1 to 6 of the unemployment spell, treatment starting during months 7 to 12, and treatment starting during months 13 to 24. By using the three time windows, the problem of conditioning on future outcomes is strongly reduced. However, we still condition to some extent on future outcomes during the time window. Because our data end in 1997, we do not analyze treatments starting later than month 24. We estimate the probability of treatment given that unemployment lasts long enough to make an individual ‘eligible’. For the treatment during months 1 to 6, we take the total sample of unemployed to estimate the propensity score. The nontreatment group includes the unemployed who either never participate in SPST or who start treatment after month 6. For the treatment during months 7 to 12 or months 13 to 24, the basic sample consists of those unemployed who are still unemployed in the first month of the period considered, i.e. in months 7 and 13, respectively. For estimating the propensity score for treatment during the considered time interval, we use all individuals who are still unemployed in the first month of the period. Sianesi (2004) estimates a separate Probit for different starting dates of unemployment and separate starting dates of the programs. In our case, the number of observations is too small for this. However, even if enough data were available, we think that it would not be advisable to estimate monthly Probits. The reason is that the starting date of the treatment is somewhat random (relative to the elapsed duration of the unemployment spell) due to available programs starting only at certain calendar dates. Therefore, we pool the treatment Probit for all inflows into unemployment in the three treatment periods assuming that the exact starting date is random within the time interval considered. However, when matching treated and non–treated individuals, we impose perfect
11 Under certain assumptions, drawing random starting times of the program is a valid alternative to use in this context, see e.g. Lechner (1999) and Lechner et al. (2005a, b) for this approach. However, this does not overcome all of the problems discussed here and we prefer to consider the timing of events explicitly. We do not introduce a random timing of the program starts among the nonparticipants for the following three reasons. First, random starting dates add noise to the data. Second, the drawn starting time might be impossible in the actual situation of the nontreated individual. Third, drawing random starting dates does not take the timing of events seriously.
12 We do not analyze multiple sequential treatments, see Bergemann et al. (2004), Lechner and Miquel (2001), and Lechner (2004).
alignment in the starting month of the unemployment spell and the elapsed unemployment duration at the start of the program. In the next step, we implement a stratified matching approach. First, we match participants and nonparticipants whose unemployment period starts in the same calendar month. A second requirement is that the nonparticipants are still unemployed in the month before the treatment starts. This way, we only match nonparticipants who might have started a treatment in the same month as the participants. The expression for the nontreatment outcome for the participants is then obtained through a local linear regression on the estimated propensity score among this narrow set of nonparticipants matched to the participants. This way, we obtain a perfect alignment in calendar time, thus avoiding drawing random starting times of the program. Our estimated TT parameter has to be interpreted in a dynamic context. We analyze treatment conditional upon the unemployment spell lasting at least until the start of the treatment and this being the first SPST treatment during the unemployment spell considered. Therefore, the estimated treatment parameter is (similar to (1) in Sianesi 2004)

$$\theta(t, \tau) = E\big(Y_\tau^{1(t)} \mid D_t = 1,\, U \geq t - 1,\, D_1 = \cdots = D_{t-1} = 0\big) - E\big(Y_\tau^{0(t)} \mid D_t = 1,\, U \geq t - 1,\, D_1 = \cdots = D_{t-1} = 0\big), \qquad (2)$$

where $D_t$ is the treatment dummy for treatment starting in month $t$ of unemployment, $Y_\tau^{1(t)}$ and $Y_\tau^{0(t)}$ are the treatment and nontreatment outcomes, respectively, in period $t + \tau - 1$, $\tau = 1, 2, \ldots$ counts the months (plus one) since the beginning of treatment in period $t$, and $U$ is the duration of unemployment.13 Note that $Y_\tau^{j(s)} \neq Y_\tau^{j(t)}$ for $j = 0, 1$ and $s \neq t$ because potential outcomes are specific to the beginning of treatment. Conditioning on past treatment decisions and outcomes, the treatment parameter for a later treatment period is not invariant with respect to changes in the determinants of the exit rates from unemployment or the treatment propensity in the earlier phase of the unemployment spell. This is a direct consequence of modelling heterogeneity with respect to the starting time of the treatment relative to the length of elapsed unemployment. Both the treatment group and the group of nonparticipants at the start of the treatment are affected by the dynamic sorting effects taking place before; see Abbring and van den Berg (2004) for a recent discussion of this problem in the context of estimating duration models. Thus, the estimated treatment parameter depends dynamically on treatment decisions and outcomes in the past when taking the timing
13 In contrast, Sianesi (2004) conditions on being unemployed in period t, i.e. U ≥ t. In our data, treatment in month t can start on any day during the month and the monthly employment status is defined by the status for the majority of days in the month. Our data do not allow us to pin down the exact day when treatment starts, see Sect. 3. We use the restriction U ≥ t − 1 defining eligibility for treatment, assuming that the assignment to treatment can occur up to 1 month before the beginning of treatment. An unemployed individual in t − 1 might anticipate obtaining a job in t. For this reason, our estimated treatment effect might be conservatively downward biased.
of events seriously (Abbring and van den Berg 2003; Fredriksson and Johansson 2003; Sianesi 2004). To avoid this problem, one often assumes a constant treatment effect over the duration of elapsed unemployment at the program start. Alternatively, other suitable uniformity or homogeneity assumptions for the treatment effect could be used. Such assumptions are not attractive in our context. Using propensity score matching in a stratified manner, we estimate the treatment parameter in (2) allowing for heterogeneity in the individual treatment effects and for an interaction of the individual treatment effects with the dynamic sorting taking place. To make this a valid exercise, we assume the following dynamic version of the conditional mean independence assumption (DCIA) to hold for our inflow sample into unemployment:

$$E\big(Y_\tau^{0(t)} \mid D_t = 1,\, U \geq t - 1,\, D_1 = \cdots = D_{t-1} = 0,\, X\big) = E\big(Y_\tau^{0(t)} \mid D_t = 0,\, U \geq t - 1,\, D_1 = \cdots = D_{t-1} = 0,\, X\big), \qquad (3)$$

where $X$ are time-invariant (during the unemployment spell) characteristics and $Y_\tau^{0(t)}$ is the nontreatment outcome in periods $\tau \geq 1$ after the beginning of treatment (see also Sianesi 2004, p. 137, for a similar discussion). We effectively assume that, conditional on X, conditional on being unemployed until period t − 1, and conditional on not receiving treatment before t, treated and nontreated individuals (both referring to treatment in period t) are comparable in their nontreatment outcomes in period t and later. The treatment parameter in (2) is interesting when in each time period one decides whether to start treatment in the next month or whether to postpone possible treatment to the future (treatment now versus waiting, see Sianesi 2004). In addition, exits from unemployment in a certain period are not known until they take place. Anticipation effects might invalidate this analysis when the actual job arrival or the actual treatment is known some time beforehand. The former might introduce a downward bias in the estimated treatment effect while the latter might introduce an upward bias. This is a problem in any of the analyses based on the timing-of-events approach. However, it will not be a problem if individuals anticipate the chances or the determinants of one of these events, as long as this occurs in the same way for treated and nontreated individuals conditional on X and the duration of elapsed unemployment in t. By construction, treated individuals and their nontreated counterparts serving as controls exhibit the same unemployment duration until the beginning of the treatment. We investigate whether the employment history has been balanced by the propensity score matching for a period of 12 months before the beginning of the unemployment spell. To finish this section, one might be interested in knowing how our estimated treatment parameter in (2) relates to the static TT in (1), which is typically estimated in the literature. To relate the static TT to our dynamic setup, we define the treatment dummy $D = \sum_{t=1}^{T} D_t \cdot I(U \geq t - 1)$, indicating whether treatment starts during the time interval $[1, T]$. The outcome variables $(Y^0, Y^1)$
in (1) refer to the post-treatment outcomes $(\tilde Y_\tau^0, Y_\tau^1)$ after the beginning of the treatment. Then, we have

$$E(Y_\tau^1 \mid D = 1) - E(\tilde Y_\tau^0 \mid D = 1) = \sum_{t=1}^{T} \big[ E(Y_\tau^{1(t)} \mid D_t = 1) - E(\tilde Y_\tau^0 \mid D_t = 1) \big] \cdot P(D_t = 1 \mid D = 1), \qquad (4)$$

where $Y_\tau^1 = \sum_{t=1}^{T} Y_\tau^{1(t)} \cdot I(D_t = 1) \cdot I(U \geq t - 1)$ and $\tilde Y_\tau^0$ represents the nontreatment outcome, either in employment or in unemployment, conditioning on no further treatment in the future (during $[1, T]$). Thus, $E(Y_\tau^1 \mid D_t = 1) - E(\tilde Y_\tau^0 \mid D_t = 1)$ cannot be related easily to $\theta(t, \tau)$, since $\theta(t, \tau)$ allows for the possibility of future treatment.14 Estimation of the different parameters has to account for different selection effects. However, in our application, the group of treated individuals is quite small relative to the nontreatment group. Therefore, the static TT is likely to be close to the weighted average of the dynamic TTs $\theta(t, \tau)$ with weights $P(D_t = 1 \mid D = 1)$ as in (4). It is not possible to sign the difference because our estimates for $\theta(t, \tau)$ change sign with $\tau$ (see next section).

4.2 Details of the matching approach

Estimating the TT requires estimating the expected nontreatment outcome for the treated individuals. This estimation of the counterfactual is based upon the observed outcomes of the nontreated individuals.15 For this, we use a matching approach (Rosenbaum and Rubin 1983; Heckman et al. 1998a; Heckman et al. 1999; Lechner 1999) based on the estimated dynamic propensity score, as described in the previous section. We apply local linear matching to estimate the average nontreatment outcome of the treated individuals. Effectively, we run a nonparametric local linear kernel regression (Heckman et al. 1998b; Pagan and Ullah 1999; Bergemann et al. 2004), which can be represented by a weight function $w_{N_0}(i, j)$ that gives nonparticipant $j$ a higher weight the stronger his similarity to participant $i$ regarding the estimated propensity score. The estimated TT can be written as

$$\frac{1}{N_1} \sum_{i \in \{D = 1\}} \Big\{ Y_{i,\tau}^{1(t)} - \sum_{j \in \{D = 0,\, ue_j = ue_i\}} w_{N_0}(i, j)\, Y_{j,\tau}^{0(t)} \Big\}, \qquad (5)$$
with $N_0$ being the number of nonparticipants $j$ still unemployed right before treatment starts in $t$, $N_1$ being the number of participants $i$ in treatment depending
14 See Fredriksson and Johansson (2004) for a similar discussion.
15 Note that non-participants encompass all non-participants in SPST including participants in
alternative treatments. This represents the choice situation for the selection into the program, which is a choice between SPST and the alternative of continuing in unemployment or taking up another treatment. As discussed in Sect. 3, the integration targets differ by type of further training.
on elapsed unemployment, and $ue_i$, $ue_j$ being the calendar month of the beginning of unemployment spell $i$ and $j$, respectively. $Y_{i,\tau}^{1(t)}$ and $Y_{j,\tau}^{0(t)}$ are the outcomes in the same calendar month. Matching estimators differ with respect to the weights attached to members of the comparison group. The most popular approach in the literature is nearest neighbor matching, using the outcome of the closest nonparticipant ($nn(i)$) as the comparison level for participant $i$ (Heckman et al. 1999; Lechner 1999). In this case, $w_{N_0}(i, nn(i)) = 1$ for the nearest neighbor $nn(i)$, as long as it is unique, and $w_{N_0}(i, j) = 0$ for all other nonparticipants $j \neq nn(i)$. We use local linear matching, where the weights are implied by a nonparametric local linear kernel regression of the nontreatment outcome on the estimated propensity score.16 This has a number of advantages compared to nearest neighbor matching. The asymptotic properties of kernel based methods are straightforward to analyze and bootstrapping provides a consistent estimator of the sampling variability of the estimator in (5) even if matching is based on closeness in generated variables (this is the case with the popular method of propensity score matching which will be discussed below); see Heckman et al. (1998a, b) or Ichimura and Linton (2001) for an asymptotic analysis of kernel based treatment estimators.17 In contrast, Abadie and Imbens (2004) show that the bootstrap is in general not valid for nearest neighbor matching due to its extreme nonsmoothness. For the local linear kernel regression in the sample of nonparticipants, we use the Gaussian kernel, see Pagan and Ullah (1999). Standard bandwidth choices (e.g. rules of thumb) for pointwise estimation are not advisable here since the estimation of the treatment effect is based on the average expected nonparticipation outcome for the group of participants, possibly after conditioning on some information to capture the heterogeneity of treatment effects. To choose the bandwidth, we use the leave-one-out cross-validation procedure suggested in Bergemann et al. (2004), mimicking the estimation of the average expected nonparticipation outcome for each period. First, for each participant $i$, we identify the nearest neighbor $nn(i)$ in the sample of nonparticipants, i.e. the nonparticipant whose propensity score is closest to that of $i$. Second, we choose the bandwidth to minimize the sum of the period-wise squared prediction errors

$$\sum_{t = T_0}^{T_0 + 35} \left[ \frac{1}{N_{1,t}} \sum_{i = 1}^{N_{1,t}} \left( Y_{nn(i),t}^{0} - \sum_{j \in \{D = 0,\, ue_j = ue_{nn(i)}\} \setminus nn(i)} w_{i,j}\, Y_{j,t}^{0} \right) \right]^2, \qquad (6)$$

where the estimation of the employment status for $nn(i)$ is not based on the nearest neighbor $nn(i)$, $T_0 = 1, 7, 13$ is the first month in the interval for unemployment duration (1–6, 7–12, 13–24) during which the treatment begins, and $Y_{nn(i),t}^{0}$ and $Y_{j,t}^{0}$ are outcomes in month $t$ of the unemployment spell.
16 The local linear regression (see Heckman et al. 1998b, for a seminal application in the estimation of treatment effects) involves estimating a weighted linear regression of the outcome variable on an intercept and the difference in the propensity score between the treated individual of interest and the nontreated. This is a nonparametric regression. The weights are kernel weights in the propensity score difference. The estimated intercept is the estimated counterfactual for the treated individual considered. The values of the weights and of the regressor differ by treated individual.
17 Heckman et al. (1998a, b) discuss the asymptotic distribution of estimated treatment effects based on local linear matching, taking account of the sampling variability of the estimated propensity score. This asymptotic result justifies the application of bootstrapping.
For the local linear regression, we only use those unemployment spells starting in the same month as for $nn(i)$. The optimal bandwidth, affecting the weights $w_{i,j}$ through the local linear regression, is determined by a one-dimensional search.18 The resulting bandwidth is sometimes larger and sometimes smaller than a rule-of-thumb value for pointwise estimation; see Ichimura and Linton (2001) for similar evidence in small samples based on simulated data. We take account of the sampling variability in the estimated propensity score by bootstrapping the standard errors of the estimated treatment effects. To account for autocorrelation over time, we use the entire time path for each individual as the block resampling unit. All the bootstrap results reported in this paper are based on 500 resamples. Since the bandwidth choice in (6) is computationally very expensive, the sample bandwidth is used in all resamples.
5 Empirical results 5.1 Descriptive evidence on SPST training spells Our empirical analysis is performed separately for West and East Germany. We restrict the data to the 25 to 55 years old in order to rule out periods of formal education or vocational training as well as early retirement. The analysis is based on the inflows from employment into unemployments which are associated with the start of a transfer payment by the Federal Labor Office during the year 1993. We observe 12,320 such spells in West Germany and 7,297 in East Germany. The analysis is based on spells, i.e. the sample involves more than one spell for individuals for whom we observe multiple unemployment spells with transfer payments in 1993 and short employment spells between. An SPST treatment is associated with an unemployment spell if the individual does not start employment before the beginning of the treatment occurs. Therefore, in cases with multiple unemployment spells, a treatment after the beginning of the second unemployment spell is only recorded for the second unemployment spell but not for the first one. For the first unemployment spell we record no treatment and the outcome is set to not employed during the second unemployment spell and while receiving treatment. Note that the first spell of the same individual can not serve as a comparison observation for the treatment during the second spell because of the perfect alignment in calendar time when estimating the TT in equation (2). 18 The bandwidth values for the Gaussian kernel obtained when minimizing (6) are for West Germany 0.5739 (1–6), 0.1809 (7–12), 0.1812 (13–24) and for East Germany 0.0503 (1–6), 1.601 (7–12), 0.3768 (13–24).
5 Empirical results

5.1 Descriptive evidence on SPST training spells

Our empirical analysis is performed separately for West and East Germany. We restrict the data to individuals aged 25 to 55 in order to rule out periods of formal education or vocational training as well as early retirement. The analysis is based on the inflows from employment into unemployment that are associated with the start of a transfer payment by the Federal Labor Office during the year 1993. We observe 12,320 such spells in West Germany and 7,297 in East Germany. The analysis is based on spells, i.e. the sample involves more than one spell for individuals for whom we observe multiple unemployment spells with transfer payments in 1993 and short employment spells in between. An SPST treatment is associated with an unemployment spell if the individual does not start employment before the beginning of the treatment occurs. Therefore, in cases with multiple unemployment spells, a treatment after the beginning of the second unemployment spell is only recorded for the second unemployment spell but not for the first one. For the first unemployment spell, we record no treatment and the outcome is set to not employed during the second unemployment spell and while receiving treatment. Note that the first spell of the same individual cannot serve as a comparison observation for the treatment during the second spell because of the perfect alignment in calendar time when estimating the TT in equation (2).

Table 4 shows the number of unemployment spells with SPST treatment depending on the elapsed duration of unemployment. There are 751 treatment spells in West Germany and 971 in East Germany. Among these, 171 in West Germany and 217 in East Germany start during the first six months of unemployment, 147 and 227, respectively, during months 7 to 12, 260 and 373, respectively, during the second year of unemployment, and 173 and 154, respectively, after two years of unemployment. SPST programs tend to start on average after a slightly longer elapsed duration of unemployment in West Germany compared to East Germany. Table 5 contains descriptive information on the starting dates. The average starting date is 16.6 months for West Germany and 15.1 months for East Germany. Considering the evidence for the three quartiles, the difference in the average arises mainly from the upper part of the distribution, i.e. the late starting dates in West Germany are later than in East Germany. Since the data for our analysis end in December 1997 and we analyze the employment outcome during the 36 months after the beginning of the treatment, we only consider treatments starting during the first 24 months of unemployment. Table 5 also provides descriptive information on the duration of training spells. In East Germany, durations are longer compared to West Germany. The average duration is about 2.4 months higher and the difference is slightly larger in the upper part of the distribution (4 months at the upper quartile) compared to the lower part of the distribution (2 months at the lower quartile).
Table 4 Number of SPST training spells

Training starts during    1–6 months   7–12 months   13–24 months   >24 months of    Total
                                                                    unemployment
West Germany                  171           147            260            173          751
East Germany                  217           227            373            154          971
Table 5 Descriptive statistics on SPST training spells

                                               West Germany    East Germany
Elapsed duration of unemployment in months
at beginning of training spell
  Average                                           16.6            15.1
  25%-quantile                                       7               7
  Median                                            14              13
  75%-quantile                                      23              21
Duration of training spell in months
  Average                                            6.4             8.8
  25%-quantile                                       3               5
  Median                                             6               9
  75%-quantile                                       8              12
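As a minimal illustration of how the counts in Table 4 and the quantiles in Table 5 can be tabulated from spell-level data, consider the following sketch; the data frame spst and its columns are hypothetical placeholders for the prepared SPST training spells.

```python
import pandas as pd

# Hypothetical SPST training spells with elapsed unemployment duration at program
# start and program duration, both in months.
spst = pd.DataFrame({
    "region": ["West", "West", "East", "East", "East"],
    "elapsed_ue_months": [4, 15, 2, 9, 30],
    "duration_months": [6, 8, 9, 12, 5],
})

# Counts by window of elapsed unemployment duration (as in Table 4).
windows = pd.cut(spst["elapsed_ue_months"], bins=[0, 6, 12, 24, 120],
                 labels=["1-6", "7-12", "13-24", ">24"])
print(spst.groupby(["region", windows], observed=False).size())

# Quartiles of elapsed duration and program duration (as in Table 5).
print(spst.groupby("region")[["elapsed_ue_months", "duration_months"]]
          .quantile([0.25, 0.5, 0.75]))
```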
5.2 Estimation of the propensity score

Even rich administrative data can be less informative than survey data specifically collected for evaluation purposes. Nevertheless, the most important variables affecting program participation are available in our data. Like Sianesi (2004), we argue that the participation probability depends upon the variables determining re-employment prospects once unemployment has begun. Consequently, we consider all individuals who have left employment in the same calendar year and who have experienced the same unemployment duration before program participation or non-participation. Following Sianesi (2004), the elapsed duration of unemployment should capture the important unobservables with respect to an individual's changing employment probabilities over the course of the unemployment spell.19 We additionally control for seasonal effects by including the month in which unemployment began. Besides, additional observable characteristics have been included in the propensity score estimation: age, ethnicity and the level of accomplished vocational training are the most important determinants of participation in training. We also include the industry and the occupational status of previous employment, as the sectoral development is very important for the re-employment probability. Especially sectors in decline and outflows from manual occupations are supposed to influence the individual's chances of re-employment and the decision to participate in training. As there are very important differences in the regional implementation and delivery of the program, we also include dummy variables for the different Länder and for the agglomeration type of the region. Finally, we include the level of previous earnings and some information about previous unemployment experiences as important covariates determining the level and duration of unemployment benefits. We use three variables containing information on earnings. Due to reporting errors and censoring problems, we do not know the earnings for all observations and we distinguish three cases. 'Positive earnings reported' is a dummy variable for earnings above the minimum level subject to social security taxation.20 'Earnings cens.' is a dummy variable for earnings being topcoded at the social security taxation threshold (Beitragsbemessungsgrenze). 'Log earnings' is log daily earnings in the range between 15 Euro and the topcoding threshold, and zero otherwise.
19 As mentioned before, conditioning on being unemployed in the month before treatment starts could be problematic in the presence of anticipation effects. If anything, this should induce a downward bias into our estimated treatment effect. However, the medium- and long-run effects on the level of the employment rate are likely to be negligible.
20 In 1992, monthly earnings below DEM 500 in West Germany and DEM 300 in East Germany for marginal part-time employees (geringfügig Beschäftigte) were not subject to social security taxation and should therefore not be present in the data. In addition, it was possible to earn at most twice as much in at most 2 months of the year. Probably due to recording errors, the data show a number of employment reports with zero or very low earnings. Since this information is not reliable, we only use the information for daily earnings reported above 15 Euro as a conservative cut-off point.
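As an aside, the construction of the three earnings covariates can be sketched as follows; the column names and example values are hypothetical, while the 15 Euro cut-off and the censoring logic follow the description in the text.

```python
import numpy as np
import pandas as pd

# Hypothetical daily earnings from the last employment spell (in Euro) and an
# indicator for topcoding at the social security taxation threshold.
df = pd.DataFrame({
    "daily_wage": [0.0, 9.5, 60.0, 150.0],
    "censored_flag": [False, False, False, True],   # topcoded at the threshold
})

# 'Positive earnings reported': earnings above the 15 Euro cut-off used in the text.
df["pos_earnings"] = (df["daily_wage"] > 15).astype(int)
# 'Earnings cens.': dummy for earnings topcoded at the taxation threshold.
df["earn_cens"] = df["censored_flag"].astype(int)
# 'Log earnings': log daily earnings between 15 Euro and the threshold, zero otherwise.
df["log_earnings"] = np.where((df["daily_wage"] > 15) & (~df["censored_flag"]),
                              np.log(df["daily_wage"].clip(lower=1e-8)), 0.0)
print(df)
```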
We argue that the dynamic conditional independence assumption in (3) is likely to hold for the covariates used in our specification of the propensity score because these variables are likely to capture selection into the program and they are predetermined at the beginning of the unemployment spell. To estimate the propensity score, we obtain Probit estimates for SPST training starting during the three time intervals for elapsed unemployment duration, i.e. 1–6 months (TR16), 7–12 months (TR712), and 13–24 months (TR1324). Tables 6 and 7 report our preferred specifications for West and East Germany, which are obtained after extensive specification testing. Our specification search starts by using all the covariates mentioned above without interactions. Then those covariates are dropped for which the Probit estimator cannot be obtained due to perfect predictions for certain values of the covariates.21 Perfect prediction only involves a small number of cases where the estimated treatment probability is exactly zero. This is likely to be a small sample problem, since it only affects dummy variables comprising very small groups of individuals. The estimated probabilities for treatment are always strictly below one; thus, for the estimation of the average effect of treatment on the treated (TT), there is never a problem finding a nontreated individual with a close propensity score, see also our subsequent sensitivity analysis in Sect. 5.4. For the variables state, firm size, regional agglomeration, and industry information, we test whether the dummy variables are jointly significant. When insignificance is found, the covariates are dropped. Finally, we test for the significance of interaction effects of gender and age with a number of covariates. Only the significant effects remain in the specification and we did not find inconsistent test results regarding the sequence of tests performed. In a last step, we investigate the goodness-of-fit for fairly narrow cells of observations based on the observed covariates. The predicted probabilities for our final preferred specification are in close correspondence to their empirical counterparts and simple goodness-of-fit tests show no rejection (detailed results are available upon request). The results for the Probit estimates in Tables 6 and 7 show that the final specifications differ between the three time intervals and between West and East. Age effects are not significant in most cases except for TR1324 in West Germany. Firm size and industry are important for all treatment types in East Germany but only for early SPST programs (TR16 and TR712) in West Germany. For some covariates, the signs of the effects differ by treatment type, e.g. WZW5 (Construction) in East Germany seems to be associated with a later start of treatment. Remarkable regional differences exist in treatment assignment by states, especially in East Germany. Unemployed individuals coming from large firms seem to be more likely to receive treatment. More highly educated individuals are more likely to receive early treatment in East Germany (especially at older ages for TR1324) and West Germany, with the exception of TR16 in West Germany. Foreigners are less likely to receive treatment (in East Germany, this
21 Such a situation would contradict the assumption required for propensity score matching that the treatment probability has to lie strictly between zero and one.
Table 6 Probit estimates SPST West Germany
Training starts during 1–6 months of unemployment
7–12 months
13–24 months
Regressor
Coef.
SE
Coef.
SE
Coef.
SE
Intercept
−3.9682
1.9912
−12.3984
2.3556
−5.6063
1.9325
Age: below 30 is omitted category Age 30–34 0.0396 Age 35–39 −0.0932 Age 40–44 −0.0805 Age 45–49 0.0164 Age 50–55 −0.1933
0.0895 0.2404 0.2482 0.2470 0.2477
0.1103 0.0705 0.0096 −0.0802 −0.5813
0.0970 0.1115 0.1177 0.1332 0.1606
0.1008 0.0799 −0.5129 −0.5481 −0.9703
0.0892 0.2154 0.2629 0.2854 0.2894
Industry: agriculture/basic materials is omitted category Metal/electronics 0.2134 0.1317 0.1366 Light industry −0.0638 0.1679 0.2645 Construction 0.1048 0.1643 −0.0995 Prod. oriented service 0.1607 0.1283 0.0910 Consumption service/state −0.0385 0.1373 −0.1407
0.1426 0.1562 0.1886 0.1379 0.1545
Occupational status: part-time is omitted category Apprentice −0.0691 0.2962 Blue collar −0.1904 0.1747 White collar 0.0966 0.1744
0.0650 0.0810
0.2989 0.3077
−0.0287 −0.0018
0.1458 0.1570
Level of education: no vocational degree omitted category Vocational training 0.3092 College/university degree 0.4842
0.1502 0.2241
−0.1145 0.1372
0.0920 0.1577
Land: Northrhine-Westphalia is omitted category Schleswig-H./Hamburg 0.1228 0.1114 Lower Sax./Bremen −0.2920 0.1173 Hesse −0.4159 0.1530 Rhineland-Palatinate 0.2307 0.1042 Baden-Württemberg −0.2196 0.1087 Bavaria −0.1772 0.0953 Firm size of earlier job: under 11 employees is omitted category 11–200 employees 0.1238 0.0802 201–500 employees 0.1326 0.1231 More than 500 employees 0.2830 0.1046 Ethnicity: German is omitted category Foreigner −0.1674
0.1122
−0.2005
0.1157
−0.2394
0.0885
Gender: male is omitted category Female −0.0601
0.0794
0.3674
0.3553
−0.1750
0.0706
0.1209 0.0886
0.4685 0.1135
0.5309 −0.0751
0.3661 0.0911
Earnings censored at taxation threshold: uncensored is omitted category Earnings cens. −0.1651 0.4582 0.5050 0.5173 Employed—6 months 0.1134 0.1079 −0.1209 0.1003 Employed—12 months 0.2125 0.0976 0.1670 0.1028 Entry months into UE 0.0046 0.0089 0.0425 0.0104
−0.7311 0.1102 0.1144 0.0164
0.4376 0.0913 0.0858 0.0086
Earnings information: no earnings is omitted category Positive earnings reported 0.6413 0.4277 Log earnings (if reported) −0.0341 0.1001
Table 6 continued Training starts during 1–6 months of unemployment
7–12 months
13–24 months
Regressor
Coef.
SE
Coef.
SE
Coef.
SE
Intercept
−3.9682
1.9912
−12.3984
2.3556
−5.6063
1.9325
−0.1141 0.2828 −0.3173 0.2763
0.2636 0.2552 0.2756 0.2594
0.1591 0.3919 0.2492 0.3353 0.5106 0.5082
0.2265 0.2363 0.2633 0.2788 0.1713 0.2644
Interactions Blue collar × age 35–44 White collar × age 35–44 Blue collar × age 45–55 White collar × age 45–55 Vocational training × age 40–55 C./U. degree × age 40–55 Female × blue collar 2 Female × white collar Female × vocational training Female × college/university Nobs
−0.3531 0.2894 −0.4921 −0.5432 12,320
8,121
0.3486 0.3359 0.2123 0.3247 5,992
Note: This table reports the coefficient estimates for the treatment probits by time window considered
holds only for TR16 and TR1324, but the number of foreigners is small here). Higher previous earnings increase the likelihood of receiving treatments TR16 and TR1324 in East Germany, whereas there are no clear-cut effects in West Germany. Also, the month of entry into unemployment (seasonal effect) seems to play a role in East Germany but not in West Germany. White collar workers are more likely to receive treatment in a number of cases. In West Germany, females are less likely to participate in TR1324, and, when highly educated, in TR712. There is no significant gender effect for TR16, and females are more likely to participate in TR712 when they were white collar workers before. In East Germany, females are more likely to receive the later treatments TR712 and TR1324 in a number of cases. There, younger females are more likely to receive TR712 and females coming from certain industries (Agriculture, Basic Materials, Production oriented services, Trade, Banking) are more likely to receive TR1324. The estimation results show that the determinants of SPST program participation differ strongly by the elapsed unemployment duration.
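For concreteness, the sketch below estimates one of these window-specific Probits (treatment starting in months 7–12 on the risk set still unemployed at the window start) with statsmodels. The data frame ue, its columns and the short covariate list are hypothetical placeholders and do not reproduce the specifications in Tables 6 and 7.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical spell-level data for the 1993 inflow sample into unemployment.
rng = np.random.default_rng(1)
n = 2000
ue = pd.DataFrame({
    "ue_duration": rng.integers(1, 40, n),        # unemployment duration in months
    "spst_start": np.where(rng.random(n) < 0.08,  # month of SPST start (NaN = no SPST)
                           rng.integers(1, 30, n), np.nan),
    "age": rng.integers(25, 56, n),
    "vocational_training": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})

# Risk set for treatment in months 7-12: unemployment lasted at least 6 months
# (the U >= t-1 eligibility rule) and no SPST participation before month 7.
risk = ue[(ue["ue_duration"] >= 6) & ~(ue["spst_start"] < 7)].copy()
risk["TR712"] = ((risk["spst_start"] >= 7) & (risk["spst_start"] <= 12)).astype(int)

X = sm.add_constant(risk[["age", "vocational_training", "female"]])
probit = sm.Probit(risk["TR712"], X).fit(disp=0)
risk["pscore_index"] = probit.fittedvalues   # fitted index X*beta, used for matching
print(probit.params)
```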
5.3 Baseline treatment effects

In our baseline model, based on the estimated propensity scores from the previous subsection, we match SPST participants and nonparticipants who started unemployment in the same month and we only use nonparticipants who are still unemployed in the month before the treatment period starts. The TT is then estimated separately for each month τ = 1, . . . , 36 after the beginning of the SPST program according to (5), where the expected nontreatment employment outcome is obtained by means of a local linear regression on the propensity
Table 7 Probit estimates SPST East Germany
Training starts during 1–6 months of unemployment
7–12 months
13–24 months
Regressor
Coef.
SE
Coef.
SE
Coef.
SE
Intercept
3.8672
1.8963
−14.6619
0.1465
−4.8182
1.9178
0.1063 0.1191 0.1084 0.1269 0.1050
0.2743 −0.0703 0.2889 0.2954 −0.0936
0.1995 0.2250 0.2020 0.2139 0.2078
0.1842 −0.1140 0.0221 −0.0984 −0.2088
0.1032 0.1136 0.1763 0.1823 0.1699
Industry: agriculture/basic materials is omitted category Metal/electronics 0.1144 0.1153 0.2619 Light industry −0.0740 0.1534 −0.0391 Construction −0.3643 0.1443 −0.1562 Prod. oriented service −0.0557 0.1049 0.0912 Consumption service/state −0.2255 0.1020 0.0045
0.1412 0.1748 0.1766 0.1255 0.1182
0.1786 0.5548 0.3592 0.3257 0.5163
0.2413 0.2738 0.2393 0.2152 0.2035
Occupational status: part-time is omitted category Blue collar −0.2016 0.1282 White collar 0.1142 0.1245
−0.1154 0.2890
0.1184 0.1140
Land: Mecklenb./West Pomerania is omitted category Berlin/Brandenb. −0.3239 0.1026 Saxony-A. −0.3250 0.1130 Saxony −0.1120 0.0967 Thuringia −0.2454 0.1151
−0.1685 −0.2075 −0.0339 −0.3723
0.1192 0.1223 0.1405 0.1388
−0.1392 −0.2607 0.0715 −0.2070
0.1062 0.1146 0.1035 0.1192
Firm size of earlier job: under 11 employees is omitted category 11–200 employees 0.0474 0.0841 0.0641 201–500 employees 0.1366 0.1105 0.0700 more than 500 employees 0.2515 0.0999 0.2339
0.0877 0.1168 0.1043
0.2405 0.4344 0.2049
0.0837 0.1038 0.1010
Level of education: no vocational degree omitted category Vocational training 0.3443 0.1320 0.2317 College/university degree 0.4133 0.1684 0.2762
0.1129 0.1631
0.0029 −0.0470
0.1251 0.2207
−1.0256
0.3841
0.7723
0.2137
−0.7781 0.2910
0.3866 0.1044
1.1531
0.4456
0.0112
0.0085
Age: below 30 is omitted category Age 30–34 0.1303 Age 35–39 −0.1209 Age 40–44 0.1626 Age 45–49 −0.0541 Age 50–55 0.0313
Ethnicity: German is omitted category Foreigner −0.5187
0.3831
Population density: rural area is omitted category
0.0744
−0.0322 −0.0574 −0.2557
0.1032 0.2292 0.1176
0.3397
0.1904
Earnings information: no earnings is omitted category Positive earnings reported −1.2245 0.4480 Log earnings (if reported) 0.3858 0.1179 Earnings censored at taxation threshold: uncensored is omitted category Earnings cens. 1.1345 0.5139 Employed—6 months −0.2090 0.0959 −0.0894 0.0965 Employed—12 months 0.1823 0.0935 −0.0971 0.0880 UE-entry −0.0268 0.0085 0.0568 0.0096
Table 7 continued Training starts during 1–6 months 7–12 months of unemployment
13–24 months
Regressor
Coef.
SE
Coef.
SE
Coef.
SE
Intercept
3.8672
1.8963
−14.6619
0.1465
−4.8182
1.9178
0.1594 0.5031
0.1681 0.2748
−0.3496 −0.3898 −0.3027 −0.1008 −0.5145
0.2990 0.3160 0.3260 0.2488 0.2365
Interactions Vocat’l trai. × age 40–55 C./U. degree × age 40–55 Female × age 30–34 Female × age 35–39 Female × age 40–44 Female × age 45–49 Female × age 50–55 Female × metal/electronics Female × light industry Female × construction Female × prod. oriented service Female × consumption service/state Nobs
−0.0766 0.2438 −0.2864 −0.6133 −0.0751
7,297
5,062
0.2393 0.2627 0.2481 0.2753 0.2470
3,517
Note: This table reports the coefficient estimates for the treatment probits by time window considered
score22 among the nontreated considered. A comparison of the estimated propensity score for SPST participants and nonparticipants shows a close overlap for each stratum defined by the month of entry into unemployment and the beginning of the SPST treatment.23 We obtain an estimate of the variance of the estimated treatment effects by bootstrapping the entire observation vector for an observed spell in our inflow sample. This way, we take account of possible autocorrelation in the outcome variable. Inference is based on 500 resamples.24 Before estimating the average difference in matched samples, we explore the specification of our matching approach:
• After matching, we do not find any significant differences in the means of the observable characteristics between the participants and the characteristics predicted for the non-treatment outcome based on local linear regressions. The matching procedure is successful in using a suitable control group with respect to the observable covariates.25
• A test of the balancing properties of propensity score matching explores differences in the outcome variables between participants and matched
22 We use the fitted index $X_i\hat{\beta}$ from the Probit estimates.
23 These results are available upon request.
24 This still fairly small number of resamples is due to the high computation time involved. However, the results seem to be quite reliable: comparing the results based on 500 resamples with the results based on only the first 200 resamples, we do not find any noticeable differences.
25 Results for these balancing tests are available upon request.
nonparticipants during months 1 to 12 before the beginning of the unemployment spell. There are basically no significant employment differences and, for most cases, the employment histories are balanced perfectly.
Figures 1, 2, 3, 4, 5 and 6 graphically represent the evaluation results. Each figure contains a panel of three graphs. The top graph shows the estimated average treatment effect for the treated during months τ = 1 to τ = 36 after the beginning of the treatment and the differences during months 1 to 12 before the beginning of the unemployment spell (τ = −1, . . . , τ = −12), where time τ is given on the horizontal axis. The graph in the middle shows the average employment outcome for the treatment group and the bottom graph shows the average estimated nontreatment outcome based on the matched nonparticipants. We put pointwise 95%-confidence intervals around the estimates. The patterns of the estimated treatment effects for months 1 to 36 after the beginning of the program are surprisingly similar across the different settings, even though the average employment rates in the middle and bottom graphs decline for later program starts. Treated individuals show an increase in employment rates during the first year and then remain at a fairly constant level during the second and third year. Only for the late treatment TR1324 in West Germany do we observe a decline of about 10 percentage points (ppoints) after 2.5 years. In West Germany, treated individuals with early treatment TR16 reach an employment rate of about 45% after 1 year. For TR712, this lies around 51 to 53% and for TR1324 around 28 to 30%. The expected average nontreatment outcome converges to a level of around 36% for TR16, around 27% for TR712, and around 29% for TR1324. As is to be expected, the future employment chances of individuals decline with longer elapsed unemployment duration. Interestingly, the effect of the treatment seems to be quite similar, except for the decline at the end for TR1324. We find a negative lock-in effect for the period right after the beginning of the program and significantly positive treatment effects on employment rates of about 10 ppoints and above after a year. For TR712 in West Germany, the estimated treatment effect of around 20 ppoints is the highest among the three cases. Though similar in nature, the results for East Germany show some differences. It takes about 2 years for the employment rates to reach their highest level. For TR16, the treatment group reaches an employment rate of about 62%, for TR712 of about 45 to 50%, and for TR1324 of about 35%. For TR1324, we see a small decline at the end. The estimated nontreatment employment rates stabilize at a level of about 50% for TR16, about 35 to 40% for TR712, and about 25 to 30% for TR1324. Again for TR1324, we observe a small decline at the end. The estimated treatment effects again show a negative lock-in effect for the period right after the beginning of the program and a significantly positive treatment effect of about 10 ppoints after about 1.5 years. The long-run treatment effect is slightly lower for the later treatment TR1324, but still significantly positive.
Fig. 1 SPST treatment West Germany months 1–6 [three panels for participants in specific skills with previous unemployment 1–6 months, West Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
Fig. 2 SPST treatment West Germany months 7–12 [three panels for participants in specific skills with previous unemployment 7–12 months, West Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
Fig. 3 SPST treatment West Germany months 13–24 [three panels for participants in specific skills with previous unemployment 13–24 months, West Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
Fig. 4 SPST treatment East Germany months 1–6 [three panels for participants in specific skills with previous unemployment 1–6 months, East Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
Fig. 5 SPST treatment East Germany months 7–12 [three panels for participants in specific skills with previous unemployment 7–12 months, East Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
Fig. 6 SPST treatment East Germany months 13–24 [three panels for participants in specific skills with previous unemployment 13–24 months, East Germany: average treatment effect (percentage-point difference in employment rate), employment rate of participants, and estimated non-treatment employment rate, plotted against month before unemployment (−) and after beginning of treatment (+), with 95% bootstrap confidence intervals]
A comprehensive cost-benefit analysis of the SPST program is not possible, mainly for two reasons. First, we lack information on the monetary costs and on transfer payments during the treatment and the unemployment spell. Second, we cannot analyze the employment effects after 36 months.

As a first step to contrast the initial negative lock-in effects of the programs with the later positive program effects, we calculate the cumulated effects of the program 12, 24, and 36 months after the beginning of the program (see Lechner et al. 2005a, b, for a similar exercise). The cumulated effects are calculated as the sum of the effects depicted in Figs. 1, 2, 3, 4, 5 and 6, starting in month 1 and summing up to months 12, 24, and 36, respectively. Table 8 provides the results. The estimated standard errors are based on the bootstrap standard errors for the month-specific treatment effects.

For West Germany, the cumulated effects after 12 months are still significantly negative for TR16 and positive but not significant for the later treatments. The cumulated effects increase with longer time horizons and become significantly positive at the 5% significance level for a one-sided test after 36 months (for TR712 already after 24 months). For East Germany, the longer duration of the treatment spells results in a stronger, significantly negative lock-in effect after 12 months. The cumulated effect is still negative after 24 months, but only significantly so for TR712. After 36 months the cumulated effects turn positive, but they are still not significant. It is likely that a significantly positive cumulated effect would be found for an even longer time horizon in East Germany. This is not certain, however, since there is a slight tendency for the period-specific effects to decline after about 2.5 years and since the standard errors tend to increase with a longer horizon.

As a further balancing test, it remains to discuss the estimated employment effects in Figs. 1, 2, 3, 4, 5 and 6 for the 12 months before the beginning of the unemployment spell. To be precise, these are the 12 months before the beginning of transfer payments by the Federal Labor Office after the job was lost. Individuals may have become unemployed earlier than this first month of the unemployment period, though having had a job in the recent past is a prerequisite for transfer payments.
Table 8 Cumulated average treatment effects

                     Training starts during 1–6, 7–12, or 13–24 months of unemployment
                     1–6 months              7–12 months             13–24 months
After                Coef.        SE         Coef.        SE         Coef.        SE
West Germany
  12 months          −1.17966     0.2201      0.433158    0.3395      0.061272    0.2192
  24 months           0.016994    0.5532      3.11666     0.7772      1.52291     0.5283
  36 months           1.60954     0.8799      5.89091     1.1616      3.15718     0.8003
East Germany
  12 months          −1.62331     0.2391     −1.56347     0.1743     −1.01759     0.1443
  24 months          −0.660957    0.5590     −1.06095     0.4295     −0.529110    0.3670
  36 months           0.580934    0.8202      0.246313    0.6885      0.413204    0.5667

Note: This table reports the cumulated sum of the monthly employment effects 12, 24, and 36 months after the beginning of treatment
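The cumulated effects in Table 8 are the sums of the monthly effects in Figs. 1, 2, 3, 4, 5 and 6 from month 1 up to months 12, 24, and 36. The sketch below reproduces this calculation on simulated monthly effects; it assumes, as one plausible reading of the note above, that the standard error of each cumulated effect is taken from the bootstrap distribution of the cumulated sum, since the text only states that the standard errors are based on the bootstrap standard errors of the month-specific effects.

import numpy as np

def cumulated_effects(tt_monthly, tt_boot, horizons=(12, 24, 36)):
    # tt_monthly: point estimates of the monthly TT for months 1..36.
    # tt_boot:    bootstrap replications of shape (n_resamples, 36).
    # Returns {horizon: (cumulated effect, bootstrap standard error)}.
    out = {}
    for h in horizons:
        point = tt_monthly[:h].sum()
        boot_sums = tt_boot[:, :h].sum(axis=1)
        out[h] = (point, boot_sums.std(ddof=1))
    return out

# Simulated monthly effects mimicking a lock-in period followed by gains.
rng = np.random.default_rng(2)
months = np.arange(1, 37)
tt_monthly = np.where(months <= 8, -0.15 + 0.02 * months, 0.10)
tt_boot = tt_monthly + rng.normal(0, 0.03, size=(500, 36))

for h, (eff, se) in cumulated_effects(tt_monthly, tt_boot).items():
    print(f"after {h} months: cumulated effect {eff:.2f}, SE {se:.2f}")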
In fact, the employment rate among the treated lies somewhere between 75 and 90% during the 12 months before the start of transfer payments. In month 1, the employment rate is above 80% in all cases, i.e. in the vast majority of cases the start of the transfer payments coincides with the start of the unemployment spell. For months 1 to 12, the estimated differences between the employment rates of the treatment group and of similar nontreated individuals are not significantly different from zero in all cases, except for month 1 for TR1324 in East Germany. In the latter case, the rejection is not strong. Since all individuals eventually become unemployed in month 0 (the time between the beginning of the unemployment spell and the beginning of the treatment), our test should focus on the differences during the earlier phase of the twelve months before. For this earlier phase, there is no evidence of systematic differences in employment rates between treated and nontreated individuals after matching.

In order to check whether our matching approach controls for something important, we have also obtained two alternative estimates of the employment differences in the additional appendix (these results are available upon request). We contrast the baseline treatment effects with the uncorrected employment differences between treated and nontreated individuals. The results show noticeable differences, especially before the beginning of unemployment. We conclude that our matching approach controls for differences across individuals which are relevant for their employment outcomes.

5.4 Sensitivity analysis

As a sensitivity analysis of the robustness of our baseline results, we discuss three alternative estimates. First, we estimate the TT of SPST against no-training. Second, we impose a strict common support condition. Third, we extend the employment history before the beginning of the unemployment spell for the West German data set.

First, the estimated TT for SPST against the comprehensive alternative of no participation in SPST might be difficult to interpret from a policy perspective since the alternative involves participation in other programs. To investigate the sensitivity of our results, we also estimate the TT for participation in SPST against the alternative of no participation in any of the training programs considered in this paper. Figures 7 and 8 compare our baseline TT estimates discussed in the previous subsection (SPST against no-SPST) with the TT estimates against the alternative of no participation in any of the training programs (SPST against no-training). The results show no substantive differences between comparing SPST to no-SPST and SPST to no-training. This also reflects the fact that the number of participants in other treatments is very small compared to the number of nontreated individuals, see Sect. 3.4. For West Germany, there are no discernible differences. For East Germany, both the negative effects during the lock-in period and the positive medium- and long-term treatment effects are a bit stronger.
Fig. 7 Sensitivity analysis West Germany—SPST against no-Training and common support [three panels for previous unemployment 1–6, 7–12, and 13–24 months, West Germany: percentage-point difference in employment rate by month before unemployment (−) and after beginning of treatment (+); series: base specification, SPST against no-training, common support]
Second, Table 9 provides evidence on the common support in the estimated propensity score between treated individuals and potential matches among the nontreated individuals who are unemployed in the month before the treatment starts. We calculate the absolute difference in the estimated propensity score, represented by the fitted index from the Probit. In all cases for East Germany, the 95%-quantile is below 0.029, which corresponds to a difference in the treatment probability of at most 1.2 ppoints.
Fig. 8 Sensitivity analysis East Germany—SPST against no-Training and common support [three panels for previous unemployment 1–6, 7–12, and 13–24 months, East Germany: percentage-point difference in employment rate by month before unemployment (−) and after beginning of treatment (+); series: base specification, SPST against no-training, common support]
For West Germany, the 95%-quantile is always below 0.009, corresponding to a difference in the treatment probability of at most 0.4 ppoints. To investigate further whether a lack of common support could affect our baseline results, we have re-estimated the TT only for those treated individuals for whom a nearest neighbor with a difference in the propensity score of at most 0.05 is available for matching.
Table 9 Analysis of common support

Absolute differences in the propensity score (latent index of the Probit model) between treated individuals and their respective nearest nontreated neighbors who are unemployed in the month before the treatment starts

Treatment         Average    95%-quantile    99%-quantile    Maximum
West Germany
  Months 1–6      0.0027     0.0087          0.0403          0.1361
  Months 7–12     0.0030     0.0077          0.0234          0.1747
  Months 13–24    0.0031     0.0085          0.0625          0.1005
East Germany
  Months 1–6      0.0041     0.0261          0.0626          0.1005
  Months 7–12     0.0093     0.0286          0.2128          0.3495
  Months 13–24    0.0068     0.0267          0.0845          0.1954

Note: This table reports descriptive statistics of the absolute differences in the estimated propensity scores
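Table 9 and the strict common-support check amount to two operations on the matched data: summarizing the absolute distance in the fitted Probit index between each treated individual and the nearest nontreated neighbor, and dropping treated individuals whose nearest neighbor lies more than 0.05 away. A minimal sketch of both steps on simulated indices (not the authors' code):

import numpy as np

def common_support_summary(index_t, index_c, caliper=0.05):
    # Distance to the nearest nontreated neighbor on the Probit index,
    # the descriptive statistics reported in Table 9, and the indicator
    # of which treated individuals survive the strict common-support rule.
    dist = np.abs(index_t[:, None] - index_c[None, :]).min(axis=1)
    stats = {
        "average": dist.mean(),
        "95%-quantile": np.percentile(dist, 95),
        "99%-quantile": np.percentile(dist, 99),
        "maximum": dist.max(),
    }
    keep = dist <= caliper
    return stats, keep

rng = np.random.default_rng(3)
index_t = rng.normal(0.2, 1.0, 300)
index_c = rng.normal(0.0, 1.0, 3000)
stats, keep = common_support_summary(index_t, index_c)
print(stats, "share of treated kept:", keep.mean())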
Figures 7 and 8 compare the resulting TT point estimates with our baseline TT estimates discussed in the previous subsection (both SPST against no-SPST). The point estimates are basically the same. This indicates that the substance of our baseline results discussed above is robust to the potential problem of a lack of common support.

Third, for East Germany, we do not have the employment history before 1992. It is likely that the earlier employment history in East Germany does not play a role, since in the former GDR there was basically no unemployment. For West Germany, we constructed the employment history up to 36 months before the beginning of unemployment. When employment status 18, 24, 30, and 36 months before the beginning of unemployment is added to the baseline Probit specification in Table 6, these additional regressors are never jointly significant (the test results are available upon request) and the individual t-statistics are never significant except for one case. Using the estimated propensity scores from these augmented Probits, the TT point estimates (‘longer employment history’) displayed in Fig. 9 virtually coincide with the baseline estimates between months 12 and 36. There is no indication that treated individuals and matched nontreated individuals are not well balanced before month 12. Thus, our results indicate that the omission of the employment history beyond 12 months before the beginning of the unemployment spell does not invalidate our baseline results.
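The longer-employment-history check adds employment status 18, 24, 30, and 36 months before the beginning of unemployment to the baseline Probit and tests the added regressors for joint significance. The text does not specify which joint test is used; the sketch below uses a likelihood-ratio test via statsmodels on a hypothetical data set whose column names are placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data: treatment dummy, a baseline covariate, and employment
# dummies 18, 24, 30, and 36 months before the unemployment spell began.
rng = np.random.default_rng(4)
n = 5000
data = pd.DataFrame({
    "treated": rng.binomial(1, 0.1, n),
    "age": rng.integers(20, 55, n),
    "emp_m18": rng.binomial(1, 0.8, n),
    "emp_m24": rng.binomial(1, 0.8, n),
    "emp_m30": rng.binomial(1, 0.8, n),
    "emp_m36": rng.binomial(1, 0.8, n),
})
base_cols = ["age"]
extra_cols = ["emp_m18", "emp_m24", "emp_m30", "emp_m36"]

restricted = sm.Probit(data["treated"], sm.add_constant(data[base_cols])).fit(disp=0)
augmented = sm.Probit(data["treated"], sm.add_constant(data[base_cols + extra_cols])).fit(disp=0)

# Likelihood-ratio test of the four added employment-history dummies.
lr = 2 * (augmented.llf - restricted.llf)
pval = stats.chi2.sf(lr, df=len(extra_cols))
print(f"LR statistic {lr:.2f}, p-value {pval:.3f}")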
Fig. 9 Sensitivity analysis West Germany—Employment history 36 months before beginning of unemployment spell (longer employment history) [three panels for previous unemployment 1–6, 7–12, and 13–24 months, West Germany: percentage-point difference in employment rate by month before unemployment (−) and after beginning of treatment (+); series: base specification, specification with longer employment history]
6 Conclusions

Based on a unique administrative data set for Germany, which has only recently been made available, we analyze the employment effects of the provision of SPST at the individual level. Specifically, we estimate the average treatment effect on the treated (TT), i.e. the differential impact the treatment shows for those individuals who participate in an SPST program. We take the 1993 inflow sample into unemployment and we distinguish three types of treatment depending upon the month in which the SPST course starts relative to the elapsed unemployment duration. We distinguish between programs starting during months 1 to 6, 7 to 12, and 13 to 24 of unemployment. We estimate the TT for participation in SPST against the comprehensive alternative of nonparticipation in SPST, which includes participation in other programs of active labor market policy. The analysis is conducted separately for West and East Germany.

The general pattern of the estimated treatment effects is quite similar for the three intervals of elapsed unemployment considered. We find negative lock-in effects shortly after the treatment starts. After a while the effects turn positive, and they persist almost completely until the end of our evaluation period. The positive effects are stronger in West Germany than in East Germany, and the lock-in effects are stronger in East Germany. The cumulated employment effects 36 months after the beginning of the treatment are significantly positive in West Germany. They are also positive for East Germany, but not significantly so. These results are robust to a number of sensitivity checks.

In light of the differences in the magnitudes of the employment effects of SPST in East and West Germany, as well as the different durations of the lock-in effects, it is tempting to speculate about why SPST is less effective in East Germany than in West Germany. Even though different treatment effects might arise because of the heterogeneity of treatment effects and differences in the treated populations, to make this argument one would have to address the fact that the observable characteristics of the training population tend to be better in East Germany than in West Germany, which would suggest higher employment effects in East Germany. We see two complementary potential reasons for a lower effectiveness of SPST in East Germany. First, lower employment effects might have been caused by training providers being of poorer quality in East Germany, where they had only recently been established after German unification, compared to the West. Second, the skill requirements expected in the East did not materialize, and many of the newly acquired skills provided in SPST (e.g. for employment in the construction sector, see Lechner et al. 2005b for a related argument) failed to match the actual job opportunities in the East. The longer lock-in periods in East Germany could then reflect longer search periods because of a stronger skill mismatch. Clearly, the reasons for the differences in the treatment effects between East and West Germany need to be explored further in future research.

Our study draws a somewhat more positive picture of public sector sponsored training than most of the previous studies based on survey data. Our results are somewhat similar to those obtained in the studies by Lechner et al. (2005a, b) based on the same data source, though the exact treatment definition, the choice of valid observations, and the econometric methods employed differ between these studies and ours. However, an overall assessment of the microeconomic effects is not possible since various pieces of information necessary for a comprehensive cost-benefit analysis are lacking in our data set.
Acknowledgements We are grateful for very helpful comments by three anonymous referees. We benefitted from comments in seminars at ZEW Mannheim, IAB Nürnberg, Goethe University Frankfurt, Stanford University, University of Hohenheim, RWI Essen, and Rauischholzhausen. We thank Annette Bergemann, Stefan Bender, Reinhard Hujer, Michael Lechner, Konrad Menzel, Ruth Miquel, Don Rubin, Jeff Smith, Robert Völter, and Conny Wunsch for helpful discussions. All errors are our sole responsibility. This paper is dedicated to our friend and colleague Reinhard Hujer on the occasion of his retirement from Goethe University in September 2005. We have had various stimulating discussions on the evaluation of active labor market policy with Reinhard Hujer. He motivated us to write our first survey on this topic in 1999. This paper is part of the project “On the effectiveness of further training programs. An evaluation based on register data provided by the Institute of Employment Research, IAB (Über die Wirksamkeit von Fortbildungs- und Umschulungsmaßnahmen. Ein Evaluationsversuch mit prozessproduzierten Daten aus dem IAB)” (IAB project number 6-531A). The data were compiled in this joint project with the Swiss Institute for International Economics and Applied Economic Research at the University of St. Gallen (SIAW) and the Institut für Arbeitsmarkt- und Berufsforschung (IAB). We gratefully acknowledge financial support by the IAB.
Appendix

Types of further training: a classification

The basic regulation of further training provides only a very general framework and does not define specific treatments with respect to integration targets or target groups. Very different treatments can be implemented under the same regulation (e.g. training for career advancement or short-term courses for the very long-term unemployed are both reported as “further vocational training”). Therefore, earlier descriptive studies26 on the types of treatment do not distinguish treatments providing basic social skills or skills preparing for the job search from treatments offering certified professional skills.

As this study uses merged data, we can additionally identify how close the treatment is to a firm-specific labor market by exploiting the information from the occupational status variable, and we can distinguish how specific the training is by using all available information on benefit payments and the type-of-training variable of the FuU data. As the training data are partially incomplete, the use of the employment data is additionally necessary to identify the full extent of the participation. The combination of these different sources allows us to identify informative (and coherent) types of treatment by applying a typology relying on the type of training from the FuU data (see Bender et al. 2005, Chap. 2.3, for an in-depth description of the information provided) and the closeness to internal labor markets as indicated by the IABS data on employment status.

The combination of both—the employment status and the program information—allows us to identify specific treatments for similar groups. While the program information “further vocational training” might comprise both employed and unemployed participants, the employment status additionally allows us to

26 One of these studies based on the reported FuU data, Blaschke and Nagel (1995), does distinguish whether the training was carried out as an adjustment or a retraining and whether it was a full-time or part-time treatment.
identify the target group (“re-integration” for specific groups or the unemployed, or “career advancement” for employees) or to indicate how closely the program is related to an internal labor market. A combination of training and employment data is therefore considered to be more informative than the unmodified information from the training data, since the latter does not show the conditions under which the program is delivered. We suggest distinguishing seven types of further training. These treatments differ according to the level of occupation-specific skills and the closeness to the internal labor market. The following section describes the seven types of further training [referred to as types (a)–(g)].

(a) Preparation, social skills and short-term training
This type of training provides non-vocational skills in educational institutions, or participants take part in programs evaluating their problems in finding regular employment (Feststellungsmaßnahmen, §41a AFG). The training provides skills at a general level and focuses on improving the job search process. In other cases, short-term training is implemented as a first stage of continued training, so that the programs prepare the participants for another further training measure (Vorschaltmaßnahmen). In short-term training, the provision of profession-specific skills is supposed to be of minor importance, and individuals who enter this type of treatment are supposed to lack fundamental general skills and social skills for the job search. We assume that these treatments do not provide formal certificates or degrees.

(b) Provision of specific professional skills and techniques
The objective of this type of further education is to improve the starting position for finding a new job by providing additional skills and specific professional knowledge in short-term and medium-term courses. These programs serve to teach or refresh single skills, e.g. computer skills or new operational practices. They are intended for the unemployed or persons at risk of becoming unemployed in order to facilitate integration into full employment. This type of treatment corresponds to the vast majority of public sector sponsored further training programs and is usually carried out by external educational institutions. Courses provide classroom training and the acquisition of professional knowledge through work experience. In most cases, participants receive certificates about the courses, signalling refreshed or newly acquired skills and the amount of theory and work experience achieved. The treatment is specific to the skills of the first vocational training degree and aims at increasing the individual chances of finding new employment within the profession. Compared to the short-term courses above, this type of training is supposed to influence the matching probability of the unemployed with the jobs offered because of the formal certificates obtained after training.

(c) Qualification via the educational system/retraining
This type of training consists of the provision of a new and comprehensive training according to the regulation of the German dual system of vocational training. It is offered to individuals who have already completed a first vocational training and face severe difficulties in finding new employment within their profession. Retraining is formal vocational training for a certified occupation after the end of a first
vocational training. It might, however, also be offered to individuals without a first formal training. Up to 1994, this type of treatment was also accessible, for career advancement, to individuals who did not meet the formal criterion of “necessity”. Participants were then granted income maintenance as a loan. Qualification via the educational system/retraining provides widely accepted formal certificates according to the vocational training of the German dual system, which consists of both theoretical training and work experience. The theoretical part of the training takes place in the public education system. The practical part of the program is often carried out in firms, providing participants with work experience in their field, but sometimes also in training establishments of the institutions providing this type of training. This type of treatment aims at the achievement of a formal job qualification in order to improve the job match.

(d) Training for specific job offers
The main objective of this type of training is the provision of specific occupational and social skills to individuals who intend to accept a job offer and need to fulfill the formal requirements for the specific job. Training of this type provides specific skills and qualifications as described under (b). Generally, individuals pass through short-term courses providing specific professional skills in order to meet the requirements of a job offer. The contents of such courses are closely linked to the job in which the individuals are employed afterwards. Usually, the courses take place in the training division of companies. The contents of the courses also include social, personal and methodological knowledge. Compared to training which offers certification after the end of a program, this type of training has only little impact on future employment prospects once the job match with the specific employer is achieved.

(e) Direct integration in the first labor market
This type of training aims at integration through wage subsidies according to §49 AFG. Wage subsidies are paid for the employment of formerly long-term unemployed individuals and are intended to decrease the competitive disadvantage of these recruits for the period of familiarisation with the skill requirements of the job. Individuals receive only practical guidance for the employment according to the requirements of the firm and are not provided with certifiable qualifications.

(f) Career advancement subsidy
This type of treatment provides training for individuals who are not unemployed or threatened by unemployment, either as retraining or as career advancement in a practised profession. This type of training was terminated in 1994. “Qualification for career advancement” works by providing loans to participants. Although not strictly an active labor market policy, career advancement was an important part of public sector sponsored further training in the early 1990s (and before). In this treatment, participants are enabled to obtain an advanced formal degree in their profession above the level of a qualified occupational training (e.g. B.A. business administration).

(g) Language training
Besides further vocational training, language training is also part of the provision of further training in Germany as regulated by the AFG. The encouragement of participation in German language courses is intended to integrate asylum seekers, displaced persons, ethnic Germans and refugees into the labor market. Participants are provided with support for an adequate education in language skills in order to take up regular employment.
References

Abadie A, Imbens G (2004) On the failure of the bootstrap for matching estimators. Unpublished Discussion Paper, Harvard University and UC Berkeley
Abbring J, van den Berg GJ (2003) The nonparametric identification of treatment effects in duration models. Econometrica 71:1491–1517
Abbring J, van den Berg GJ (2004) Social experiments and instrumental variables with duration outcomes. Unpublished Manuscript, Free University Amsterdam and Tinbergen Institute
Bender S, Bergemann A, Fitzenberger B, Lechner M, Miquel R, Speckesser S, Wunsch C (2005) Über die Wirksamkeit von Fortbildungs- und Umschulungsmaßnahmen. Beiträge zur Arbeitsmarkt- und Berufsforschung, IAB, Nürnberg
Bender S, Klose C (2000) Berufliche Weiterbildung für Arbeitslose – ein Weg zurück in die Beschäftigung? Analyse einer Abgängerkohorte des Jahres 1986 aus Maßnahmen der Fortbildung und Umschulung mit der ergänzten IAB-Beschäftigtenstichprobe 1975–1990. Mitteilungen aus der Arbeitsmarkt- und Berufsforschung 33(3):421–444
Bender S, Haas A, Klose C (2000) IAB employment subsample 1975–1995. Schmollers Jahrbuch 120:649–662
Bergemann A, Fitzenberger B, Speckesser S (2004) Evaluating the dynamic employment effects of training programs in East Germany using conditional difference-in-differences. ZEW Discussion Paper, Mannheim
Blaschke D, Nagel E (1995) Beschäftigungssituation von Teilnehmern an AFG-finanzierter beruflicher Weiterbildung. Mitteilungen aus der Arbeitsmarkt- und Berufsforschung 28(2):195–213
BLK [Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung] (2000) Erstausbildung und Weiterbildung. Materialien zur Bildungsplanung und Forschungsförderung (83). BLK, Bonn
Bundesanstalt für Arbeit (1993, 1997, 2001) Berufliche Weiterbildung. Bundesanstalt für Arbeit, Nürnberg (various issues)
Bundesanstalt für Arbeit (2003) Geschäftsbericht 2002. Einundfünfzigster Geschäftsbericht der Bundesanstalt für Arbeit. Bundesanstalt für Arbeit, Nürnberg
Bundesministerium für Bildung und Forschung (2003) Berichtssystem Weiterbildung VII, Integrierter Gesamtbericht zur Weiterbildung in Deutschland. BMBF, Berlin/Bonn
Fitzenberger B, Prey H (2000) Evaluating public sector sponsored training in East Germany. Oxford Econ Papers 52:497–520
Fitzenberger B, Speckesser S (2002) Weiterbildungsmaßnahmen in Ostdeutschland. Ein Misserfolg der Arbeitsmarktpolitik? In: Schmähl W (ed) Wechselwirkungen zwischen Arbeitsmarkt und sozialer Sicherung. Schriftenreihe des Vereins für Socialpolitik, Duncker und Humblot
Fredriksson P, Johansson P (2003) Program evaluation and random program starts. Institute for Labour Market Policy Evaluation (IFAU), Uppsala, Working Paper 2003:1
Fredriksson P, Johansson P (2004) Dynamic treatment assignment—the consequences for evaluations using observational data. IZA Discussion Paper 1062
Heckman J, Ichimura H, Smith JA, Todd P (1998a) Characterizing selection bias using experimental data. Econometrica 66:1017–1098
Heckman J, Ichimura H, Todd P (1998b) Matching as an econometric evaluation estimator. Rev Econ Stud 65:261–294
Heckman J, LaLonde RJ, Smith JA (1999) The economics and econometrics of active labor market programs. In: Ashenfelter O, Card D (eds) Handbook of labor economics, Vol. 3A. Elsevier Science, Amsterdam, pp 1865–2097
Hujer R, Wellner M (1999) The effects of public sector sponsored training on unemployment and employment duration in East Germany.
Discussion Paper, Goethe University, Frankfurt
Ichimura H, Linton O (2001) Asymptotic expansions for some semiparametric program evaluation estimators. Discussion Paper, London School of Economics and University College London
Imbens G (2000) The role of the propensity score in estimating dose–response functions. Biometrika 87:706–710
Lechner M (1999) Earnings and employment effects of continuous off-the-job training in East Germany after unification. J Bus Econ Stat 17:74–90
Lechner M (2001) Identification and estimation of causal effects of multiple treatments under the conditional independence assumption. In: Lechner M, Pfeiffer F (eds) Econometric evaluation of active labor market policies in Europe. Physica-Verlag, Heidelberg
Lechner M (2004) Sequential matching estimation of dynamic causal models. Discussion Paper 2004–06, University of St. Gallen
Lechner M, Miquel R (2001) A potential outcome approach to dynamic program evaluation—Part I: Identification. Discussion Paper 2001–07, SIAW, University of St. Gallen
Lechner M, Miquel R, Wunsch C (2005a) Long-run effects of public sector sponsored training in West Germany. IZA Discussion Paper 1443
Lechner M, Miquel R, Wunsch C (2005b) The curse and blessing of training the unemployed in a changing economy: the case of East Germany after unification. Discussion Paper, University of St. Gallen
Pagan A, Ullah A (1999) Nonparametric econometrics. Cambridge University Press, Cambridge
Rosenbaum PR, Rubin DB (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55
Roy AD (1951) Some thoughts on the distribution of earnings. Oxford Econ Papers 3:135–146
Rubin DB (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol 66:688–701
Sianesi B (2004) An evaluation of the Swedish system of active labor market programs in the 1990s. Rev Econ Stat 86(1):133–155
Speckesser S (2004) Essays on evaluation of active labour market policy. Ph.D. Dissertation, Department of Economics, University of Mannheim
Wunsch C (2006) Labour market policy in Germany: institutions, instruments and reforms since unification. Discussion Paper, University of St. Gallen