CONTENTS

HENRIK JACOBSEN KLEVEN, MARTIN B. KNUDSEN, CLAUS THUSTRUP KREINER, SØREN PEDERSEN, AND EMMANUEL SAEZ: Unwilling or Unable to Cheat? Evidence From a Tax Audit Experiment in Denmark . . . 651
ERIC FRENCH AND JOHN BAILEY JONES: The Effects of Health Insurance and Self-Insurance on Retirement Behavior . . . 693
XAVIER GABAIX: The Granular Origins of Aggregate Fluctuations . . . 733
EDUARDO FAINGOLD AND YULIY SANNIKOV: Reputation in Continuous-Time Games . . . 773
MICHIHIRO KANDORI: Weakly Belief-Free Equilibria in Repeated Games With Private Monitoring . . . 877
JACOB K. GOEREE AND LEEAT YARIV: An Experimental Study of Collective Deliberation . . . 893
MARKUS BRÜCKNER AND ANTONIO CICCONE: Rain and the Democratic Window of Opportunity . . . 923

NOTES AND COMMENTS:
AZEEM M. SHAIKH AND EDWARD J. VYTLACIL: Partial Identification in Triangular Systems of Equations With Binary Dependent Variables . . . 949

ANNOUNCEMENTS . . . 957
FORTHCOMING PAPERS . . . 961
2010 ELECTION OF FELLOWS TO THE ECONOMETRIC SOCIETY . . . 963
VOL. 79, NO. 3 — May, 2011
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org EDITOR STEPHEN MORRIS, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.;
[email protected] MANAGING EDITOR GERI MATTSON, 2002 Holly Neck Road, Baltimore, MD 21221, U.S.A.; [email protected] CO-EDITORS DARON ACEMOGLU, Dept. of Economics, MIT, E52-380B, 50 Memorial Drive, Cambridge, MA 02142-1347, U.S.A.;
[email protected] PHILIPPE JEHIEL, Dept. of Economics, Paris School of Economics, 48 Bd Jourdan, 75014 Paris, France; University College London, U.K.;
[email protected] WOLFGANG PESENDORFER, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.;
[email protected] JEAN-MARC ROBIN, Dept. of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France and University College London, U.K.;
[email protected] JAMES H. STOCK, Dept. of Economics, Harvard University, Littauer M-26, 1830 Cambridge Street, Cambridge, MA 02138, U.S.A.;
[email protected] ASSOCIATE EDITORS YACINE AÏT-SAHALIA, Princeton University JOSEPH G. ALTONJI, Yale University JAMES ANDREONI, University of California, San Diego JUSHAN BAI, Columbia University MARCO BATTAGLINI, Princeton University PIERPAOLO BATTIGALLI, Università Bocconi DIRK BERGEMANN, Yale University YEON-KOO CHE, Columbia University XIAOHONG CHEN, Yale University VICTOR CHERNOZHUKOV, Massachusetts Institute of Technology J. DARRELL DUFFIE, Stanford University JEFFREY ELY, Northwestern University HALUK ERGIN, Duke University JIANQING FAN, Princeton University MIKHAIL GOLOSOV, Yale University FARUK GUL, Princeton University JINYONG HAHN, University of California, Los Angeles PHILIP A. HAILE, Yale University JOHANNES HORNER, Yale University MICHAEL JANSSON, University of California, Berkeley PER KRUSELL, Stockholm University FELIX KUBLER, University of Zurich OLIVER LINTON, London School of Economics BART LIPMAN, Boston University
THIERRY MAGNAC, Toulouse School of Economics (GREMAQ and IDEI) DAVID MARTIMORT, IDEI-GREMAQ, Université des Sciences Sociales de Toulouse, Paris School of Economics STEVEN A. MATTHEWS, University of Pennsylvania ROSA L. MATZKIN, University of California, Los Angeles SUJOY MUKERJI, University of Oxford LEE OHANIAN, University of California, Los Angeles WOJCIECH OLSZEWSKI, Northwestern University NICOLA PERSICO, New York University JORIS PINKSE, Pennsylvania State University BENJAMIN POLAK, Yale University PHILIP J. RENY, University of Chicago SUSANNE M. SCHENNACH, University of Chicago ANDREW SCHOTTER, New York University NEIL SHEPHARD, University of Oxford MARCIANO SINISCALCHI, Northwestern University JEROEN M. SWINKELS, Northwestern University ELIE TAMER, Northwestern University EDWARD J. VYTLACIL, Yale University IVÁN WERNING, Massachusetts Institute of Technology ASHER WOLINSKY, Northwestern University
EDITORIAL ASSISTANT: MARY BETH BELLANDO, Dept. of Economics, Princeton University, Fisher Hall, Princeton, NJ 08544-1021, U.S.A.;
[email protected] Information on MANUSCRIPT SUBMISSION is provided in the last two pages. Information on MEMBERSHIP, SUBSCRIPTIONS, AND CLAIMS is provided in the inside back cover.
SUBMISSION OF MANUSCRIPTS TO ECONOMETRICA

1. Members of the Econometric Society may submit papers to Econometrica electronically in pdf format according to the guidelines at the Society's website: http://www.econometricsociety.org/submissions.asp Only electronic submissions will be accepted. In exceptional cases for those who are unable to submit electronic files in pdf format, one copy of a paper prepared according to the guidelines at the website above can be submitted, with a cover letter, by mail addressed to Professor Stephen Morris, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, USA.

2. There is no charge for submission to Econometrica, but only members of the Econometric Society may submit papers for consideration. In the case of coauthored manuscripts, at least one author must be a member of the Econometric Society for the current calendar year. Note that Econometrica rejects a substantial number of submissions without consulting outside referees.

3. It is a condition of publication in Econometrica that copyright of any published article be transferred to the Econometric Society. Submission of a paper will be taken to imply that the author agrees that copyright of the material will be transferred to the Econometric Society if and when the article is accepted for publication, and that the contents of the paper represent original and unpublished work that has not been submitted for publication elsewhere. If the author has submitted related work elsewhere, or if he does so during the term in which Econometrica is considering the manuscript, then it is the author's responsibility to provide Econometrica with details. There is no page fee and no payment made to the authors.

4. Econometrica has the policy that all results (empirical, experimental and computational) must be replicable.

5. Current information on turnaround times is published in the Editor's Annual Report in the January issue of the journal. These reports are reproduced on the journal's website at http://www.econometricsociety.org/editorsreports.asp.

6. Papers should be accompanied by an abstract of no more than 150 words that is full enough to convey the main results of the paper.

7. Additional information on submitting papers is available on the journal's website at http://www.econometricsociety.org/submissions.asp.

Typeset at VTEX, Akademijos Str. 4, 08412 Vilnius, Lithuania. Printed at The Sheridan Press, 450 Fame Avenue, Hanover, PA 17331, USA. Copyright ©2011 by The Econometric Society (ISSN 0012-9682).

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation, including the name of the author. Copyrights for components of this work owned by others than the Econometric Society must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works, requires prior specific permission and/or a fee. Posting of an article on the author's own website is allowed subject to the inclusion of a copyright statement; the text of this statement can be downloaded from the copyright page on the website www.econometricsociety.org/permis.asp.
Any other permission requests or questions should be addressed to Claire Sashi, General Manager, The Econometric Society, Dept. of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA. E-mail:
[email protected]. Econometrica (ISSN 0012-9682) is published bi-monthly by the Econometric Society, Department of Economics, New York University, 19 West 4th Street, New York, NY 10012. Mailing agent: Sheridan Press, 450 Fame Avenue, Hanover, PA 17331. Periodicals postage paid at New York, NY and additional mailing offices. U.S. POSTMASTER: Send all address changes to Econometrica, Journals Department, John Wiley & Sons Inc., 350 Main Street, Malden, MA 02148, USA.
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Membership Joining the Econometric Society, and paying by credit card the corresponding membership rate, can be done online at www.econometricsociety.org. Memberships are accepted on a calendar year basis, but the Society welcomes new members at any time of the year, and in the case of print subscriptions will promptly send all issues published earlier in the same calendar year. Membership Benefits • Possibility to submit papers to Econometrica, Quantitative Economics, and Theoretical Economics • Possibility to submit papers to Econometric Society Regional Meetings and World Congresses • Full text online access to all published issues of Econometrica (Quantitative Economics and Theoretical Economics are open access) • Full text online access to papers forthcoming in Econometrica (Quantitative Economics and Theoretical Economics are open access) • Free online access to Econometric Society Monographs, including the volumes of World Congress invited lectures • Possibility to apply for travel grants for Econometric Society World Congresses • 40% discount on all Econometric Society Monographs • 20% discount on all John Wiley & Sons publications • For print subscribers, hard copies of Econometrica, Quantitative Economics, and Theoretical Economics for the corresponding calendar year Membership Rates Membership rates depend on the type of member (ordinary or student), the class of subscription (print and online or online only) and the country of residence (high income or middle and low income). The rates for 2011 are the following:
                                          High Income          Other Countries
Ordinary Members
  1 year (2011), Print and Online         $100 / €80 / £65     $60 / €48
  1 year (2011), Online only              $55 / €45 / £35      $15 / €12
  3 years (2011–2013), Print and Online   $240 / €192 / £156   $144 / €115
  3 years (2011–2013), Online only        $132 / €108 / £84    $36 / €30

Student Members
  1 year (2011), Print and Online         $60 / €48 / £40      $60 / €48
  1 year (2011), Online only              $15 / €12 / £10      $15 / €12
Euro rates are for members in Euro area countries only. Sterling rates are for members in the UK only. All other members pay the US dollar rate. Countries classified as high income by the World Bank are: Andorra, Aruba, Australia, Austria, The Bahamas, Bahrain, Barbados, Belgium, Bermuda, Brunei Darussalam, Canada, Cayman Islands, Channel Islands, Croatia, Cyprus, Czech Republic, Denmark, Equatorial Guinea, Estonia, Faeroe Islands, Finland, France, French Polynesia, Germany, Gibraltar, Greece, Greenland, Guam, Hong Kong (China), Hungary, Iceland, Ireland, Isle of Man, Israel, Italy, Japan, Rep. of Korea, Kuwait, Latvia, Liechtenstein, Luxembourg, Macao (China), Malta, Monaco, Netherlands, Netherlands Antilles, New Caledonia, New Zealand, Northern Mariana Islands, Norway, Oman, Poland, Portugal, Puerto Rico, Qatar, San Marino, Saudi Arabia, Singapore, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Taiwan (China), Trinidad and Tobago, Turks and Caicos Islands, United Arab Emirates, United Kingdom, United States, Virgin Islands (US). Institutional Subscriptions Information on Econometrica subscription rates for libraries and other institutions is available at www.econometricsociety.org. Subscription rates depend on the class of subscription (print and online or online only) and the country classification (high income, middle income, or low income). Back Issues and Claims For back issues and claims contact Wiley Blackwell at
[email protected].
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Administrative Office: Department of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA; Tel. 212-998-3820; Fax 212-995-4487 General Manager: Claire Sashi (
[email protected]) 2011 OFFICERS BENGT HOLMSTRÖM, Massachusetts Institute of Technology, PRESIDENT JEAN-CHARLES ROCHET, University of Zurich, FIRST VICE-PRESIDENT JAMES HECKMAN, University of Chicago, SECOND VICE-PRESIDENT JOHN MOORE, University of Edinburgh and London School of Economics, PAST PRESIDENT RAFAEL REPULLO, CEMFI, EXECUTIVE VICE-PRESIDENT
2011 COUNCIL DARON ACEMOGLU, Massachusetts Institute of Technology (*)MANUEL ARELLANO, CEMFI ORAZIO ATTANASIO, University College London MARTIN BROWNING, University of Oxford DAVID CARD, University of California, Berkeley JACQUES CRÉMER, Toulouse School of Economics MATHIAS DEWATRIPONT, Free University of Brussels DARRELL DUFFIE, Stanford University GLENN ELLISON, Massachusetts Institute of Technology HIDEHIKO ICHIMURA, University of Tokyo (*)MATTHEW O. JACKSON, Stanford University MICHAEL P. KEANE, University of Technology Sydney LAWRENCE J. LAU, Chinese University of Hong Kong CHARLES MANSKI, Northwestern University CESAR MARTINELLI, ITAM
ANDREU MAS-COLELL, Universitat Pompeu Fabra and Barcelona GSE AKIHIKO MATSUI, University of Tokyo HITOSHI MATSUSHIMA, University of Tokyo ROSA MATZKIN, University of California, Los Angeles ANDREW MCLENNAN, University of Queensland COSTAS MEGHIR, University College London and Yale University MARGARET MEYER, University of Oxford STEPHEN MORRIS, Princeton University JUAN PABLO NICOLINI, Universidad Torcuato di Tella (*)ROBERT PORTER, Northwestern University JEAN-MARC ROBIN, Sciences Po and University College London LARRY SAMUELSON, Yale University ARUNAVA SEN, Indian Statistical Institute JÖRGEN W. WEIBULL, Stockholm School of Economics
The Executive Committee consists of the Officers, the Editors of Econometrica (Stephen Morris), Quantitative Economics (Orazio Attanasio), and Theoretical Economics (Martin J. Osborne), and the starred (*) members of the Council.
REGIONAL STANDING COMMITTEES Australasia: Andrew McLennan, University of Queensland, CHAIR; Maxwell L. King, Monash University, SECRETARY. Europe and Other Areas: Jean-Charles Rochet, University of Zurich, CHAIR; Helmut Bester, Free University Berlin, SECRETARY; Enrique Sentana, CEMFI, TREASURER. Far East: Hidehiko Ichimura, University of Tokyo, CHAIR. Latin America: Juan Pablo Nicolini, Universidad Torcuato di Tella, CHAIR; Juan Dubra, University of Montevideo, SECRETARY. North America: Bengt Holmström, Massachusetts Institute of Technology, CHAIR; Claire Sashi, New York University, SECRETARY. South and Southeast Asia: Arunava Sen, Indian Statistical Institute, CHAIR.
Econometrica, Vol. 79, No. 3 (May, 2011), 651–692
UNWILLING OR UNABLE TO CHEAT? EVIDENCE FROM A TAX AUDIT EXPERIMENT IN DENMARK

BY HENRIK JACOBSEN KLEVEN, MARTIN B. KNUDSEN, CLAUS THUSTRUP KREINER, SØREN PEDERSEN, AND EMMANUEL SAEZ1

This paper analyzes a tax enforcement field experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, threat-of-audit letters were randomly assigned and sent to tax filers in both groups. We present three main empirical findings. First, using baseline audit data, we find that the tax evasion rate is close to zero for income subject to third-party reporting, but substantial for self-reported income. Since most income is subject to third-party reporting, the overall evasion rate is modest. Second, using quasi-experimental variation created by large kinks in the income tax schedule, we find that marginal tax rates have a positive impact on tax evasion for self-reported income, but that this effect is small in comparison to legal avoidance and behavioral responses. Third, using the randomization of enforcement, we find that prior audits and threat-of-audit letters have significant effects on self-reported income, but no effect on third-party reported income. All these empirical results can be explained by extending the standard model of (rational) tax evasion to allow for the key distinction between self-reported and third-party reported income.

KEYWORDS: Tax evasion, field experiment, tax enforcement.
1. INTRODUCTION

AN EXTENSIVE LITERATURE has studied tax evasion and tax enforcement from both the theoretical and empirical perspective. The theoretical literature builds on the Allingham and Sandmo (1972) model in which taxpayers report income to the tax authorities to maximize expected utility, taking into account a probability of audit and a penalty for cheating. Under low audit probabilities and low penalties, the expected return to evasion is high and the model predicts substantial noncompliance. This prediction is in stark contrast with the observation that compliance levels are high in modern tax systems despite low audit rates and fairly modest penalties.2 This suggests that the standard economic model misses important aspects of the real-world reporting environment. In particular, many have argued that observed compliance levels can only be explained by psychological or cultural aspects of tax compliance such as social norms, tax morale, patriotism, guilt, and shame (e.g., Andreoni, Erard, and Feinstein (1998)). In other words, taxpayers, despite being able to cheat, are unwilling to do so for noneconomic reasons.

While psychology and culture may be important in the decision to evade taxes, the standard economic model deviates from the real world in another potentially important aspect: it focuses on a situation with pure self-reporting. By contrast, all advanced economies make extensive use of third-party information reporting whereby institutions such as employers, banks, investment funds, and pension funds report taxable income earned by individuals (employees or clients) directly to the government. Under third-party reporting, the observed audit rate is a poor proxy for the probability of detection faced by a taxpayer contemplating to engage in tax evasion, because systematic matching of information reports to income tax returns will uncover any discrepancy between the two (Sandmo (2005); Slemrod (2007)). Thus, taxpayers with only third-party reported income may be unable to cheat on their taxes. Indeed, the U.S. Taxpayer Compliance Measurement Program (TCMP) has documented that aggregate compliance is much higher for income categories with substantial information reporting than for income categories with little or no information reporting (Internal Revenue Service (1996, 2006)).

In this study, we first extend the standard economic model of tax evasion to account for the fact that the probability of detection is endogenous to the type of income being underreported (third-party reported versus self-reported income). The model predicts that evasion will be very low for third-party reported income, but substantial for self-reported income. It also predicts that the effects of tax enforcement (audits, penalties) and tax policy (marginal tax rates) on evasion will be larger for self-reported income than for third-party reported income. Second, we provide a comprehensive empirical test of these predictions based on a large field experiment carried out in collaboration with the Danish tax collection agency (SKAT). The experiment imposes different audit regimes on randomly selected taxpayers, and has been designed to provide evidence on the size of evasion as well as the response of evasion to tax enforcement and tax rates under different information environments (third-party reporting versus self-reporting). Unlike previous work such as the U.S. TCMP studies, our data allow us to distinguish precisely between income items subject to third-party reporting and income items subject to self-reporting for each individual in the sample, and to measure treatment effects on those two forms of income separately.

1 We thank a co-editor, Alan Auerbach, Oriana Bandiera, Richard Blundell, Raj Chetty, John Friedman, William Gentry, Kåre P. Hagen, Wojciech Kopczuk, Monica Singhal, Joel Slemrod, four anonymous referees, and numerous seminar and conference participants for constructive comments and discussions. We are also thankful to Jakob Egholt Søgaard for outstanding research assistance. Financial support from ESRC Grant RES-000-22-3241, NSF Grant SES-0850631, and a grant from the Economic Policy Research Network (EPRN) is gratefully acknowledged. The responsibility for all interpretations and conclusions expressed in this paper lies solely with the authors and does not necessarily represent the views of the Danish tax administration (SKAT) or the Danish government.

2 For example, Andreoni, Erard, and Feinstein (1998) concluded at the end of their influential survey that "the most significant discrepancy that has been documented between the standard economic model of compliance and real-world compliance behavior is that the theoretical model greatly overpredicts noncompliance."

© 2011 The Econometric Society    DOI: 10.3982/ECTA9113
The experiment was implemented on a stratified random sample of about 42,800 individual taxpayers during the filing and auditing seasons of 2007 and 2008. In the first stage, taxpayers were randomly selected for unannounced audits of tax returns filed in 2007. These audits were comprehensive and any detected misreporting was corrected and penalized according to Danish law. The selected taxpayers were not aware that the audits were part of a special study. For taxpayers not selected for these audits, tax returns were not examined under any circumstances. In the second stage, employees in both the audit and no-audit groups were randomly selected for pre-announced audits of tax returns filed in 2008. One group of taxpayers received a letter telling them that their return would certainly be audited, another group received a letter telling them that half of everyone in their group would be audited, while a third group received no letter. The second stage therefore provides exogenous variation in the probability of being audited.

The empirical analysis is divided into three main parts. The first part studies the anatomy of tax compliance using the baseline audit data. While the overall tax evasion uncovered by audits constitutes a modest share of total income, there is considerable variation in tax evasion rates across income items depending on the information environment. The tax evasion rate for third-party reported income is close to zero, whereas the tax evasion rate for self-reported income is substantial. Across different taxpayers, we find that individuals who earn mostly self-reported income and display substantial noncompliance overall still do not underreport their third-party reported income, while individuals who earn mostly third-party reported income and display very little noncompliance overall often fully evade taxes on their self-reported income. These findings are consistent with the theoretical model and suggest that the high degree of compliance is driven by the widespread use of information reporting rather than an intrinsic aversion to cheating. We also study the impact of social and cultural variables on compliance. Although some of these variables are correlated with tax evasion, their impact is very small in comparison to variables that capture information and incentives, namely the presence and size of self-reported income or losses. Taken together, our findings suggest that tax evasion is low, not because taxpayers are unwilling to cheat, but because they are unable to cheat successfully due to the widespread use of third-party reporting.

The second part estimates the effect of the marginal tax rate on evasion using quasi-experimental variation in tax rates created by large and salient kinks in the nonlinear income tax schedule. The effect of marginal tax rates on evasion is theoretically ambiguous, and existing empirical results have been very sensitive to specification due to data and identification problems. As shown by Saez (2010), the compensated elasticity of reported income with respect to the marginal tax rate can be identified from bunching around kinks in progressive tax schedules. Unlike existing bunching studies, our data allow us to compare
bunching in pre-audit and post-audit incomes so as to separately identify compensated elasticities of illegal evasion versus legal avoidance. We find that evasion elasticities for self-reported income are positive but small relative to the total elasticity. This implies that marginal tax rates have only modest effects on tax evasion that are dwarfed by the third-party reporting effects obtained in part one.

The third part studies the effect of tax enforcement on evasion using the randomization of audits and audit threats. First, we estimate the effect of audits on future reported income by comparing the audit and no-audit groups in the following year. Past audits may affect reported income by changing the perceived probability of detection. Consistent with our theoretical model, we find that audits have a strong positive impact on reported income in the following year, with the effect driven entirely by self-reported income. Second, we estimate the effect of the probability of audit on reported income by comparing the threat-of-audit letter and no-letter groups. Because taxpayers received the letters shortly after receiving a prepopulated return containing third-party information, we focus on the effect of letters on self-reported adjustments to the prepopulated return. Consistent with the predictions of the model, we find that audit threats have a positive impact on self-reported income and that the effects are stronger for the 100% threat than for the 50% threat.

Our paper contributes to a large body of empirical work studying the size and determinants of tax evasion, including the effect of tax rates, prior audits, audit probabilities, penalties, and socioeconomic variables.3 Most of the literature relies on observational and nonexperimental data, which is associated with important measurement and identification problems, or on laboratory experiments that do not capture central aspects of the real-world reporting environment such as the presence of third-party reporting. An important exception in the literature is Slemrod, Blumenthal, and Christian (2001), who analyze the effects of threat-of-audit letters in a small field experiment in Minnesota; the last part of our analysis builds on their design.

3 Andreoni, Erard, and Feinstein (1998) and Slemrod and Yitzhaki (2002) provided extensive surveys. An earlier version of this paper (Kleven, Knudsen, Kreiner, Pedersen, and Saez (2010)) also provides a more thorough review of the literature.

The paper is organized as follows. Section 2 presents an economic model of tax evasion with third-party reporting. Section 3 describes the context, experimental design, and data. Section 4 analyzes the anatomy of tax compliance. Section 5 estimates the effect of the marginal tax rate on evasion. Section 6 estimates the effects of tax enforcement on evasion. Section 7 concludes.

2. A SIMPLE ECONOMIC MODEL OF TAX EVASION

We consider a version of the Allingham–Sandmo (henceforth AS) model with risk-neutral taxpayers and an endogenous audit probability that depends
on reported income.4 The basic model is similar to models considered in the literature, but we present the condition determining tax evasion in a different manner to demonstrate that a high degree of tax compliance is potentially consistent with a low audit probability and a low, or even zero, penalty for evasion. We then introduce third-party reporting into the model and discuss its implications for the structure of the (endogenous) audit probability and tax compliance behavior. Notice that the assumption of risk neutrality, besides simplifying the analysis, makes our case harder because risk-neutral taxpayers are more inclined to evade taxes than risk-averse taxpayers.

We consider a taxpayer with true income ȳ, reported income y, and undeclared income e ≡ ȳ − y. Let p be the probability that the government detects undeclared income. We can think of the detection probability as a product of the probability of audit and the probability of detection conditional on audit.5 The distinction between these two probabilities is implicit in the model, but becomes relevant in the interpretation of the empirical findings from the randomized experiment. We assume that the probability of detection is an increasing function of undeclared income, p = p(e), where p′(e) > 0. That is, the more the individual evades, the more likely is the tax administration to suspect underreporting and to carry out an audit. When evasion is detected, the taxpayer is forced to pay the evaded tax plus a penalty. The tax is proportional to income with rate τ, and the penalty is proportional to the evaded tax and is given by θ. The risk-neutral taxpayer maximizes expected net-of-tax income, that is,
(1)   u = (1 − p(e)) · [ȳ(1 − τ) + τe] + p(e) · [ȳ(1 − τ) − θτe]
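Expanding (1) before differentiating makes the next step immediate; this intermediate algebra is not in the original text and is added only for completeness:

```latex
u = \bar{y}(1-\tau) + \tau e\,\bigl[1 - p(e)(1+\theta)\bigr],
\qquad
\frac{du}{de} = \tau\bigl[1 - p(e)(1+\theta)\bigr] - \tau e\,p'(e)(1+\theta).
```

Setting du/de = 0 and dividing through by τ yields equation (2) below.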
An interior optimum for e satisfies the first-order condition du/de = 0, which can be written as
(2)   [p(e) + p′(e) · e](1 + θ) = 1
The second-order condition to this problem puts a restriction on the second-order derivative of p(e).6 We can define the elasticity of the detection probability with respect to evasion as ε ≡ p′(e)e/p ≥ 0. The first-order condition that determines tax evasion can then be written as
(3)   p(e) · (1 + θ) · (1 + ε(e)) = 1
4 A number of previous studies have considered an endogenous audit probability, including the original paper by Allingham and Sandmo (1972), Yitzhaki (1987), Slemrod and Yitzhaki (2002), and Sandmo (2005).

5 For expositional simplicity, we make the assumption that a tax audit either uncovers everything or nothing; there is no middle ground where tax evasion is partially uncovered.

6 The second-order condition is given by −2p′(e) − p″(e) · e < 0. A sufficient condition for this to hold is that p(·) is convex so that p″(e) ≥ 0.
The right-hand side of this condition is the marginal benefit of an extra dollar of evasion, while the left-hand side is the expected marginal cost of an extra dollar of evasion. Under ε = 0 as in the standard model with fixed p, the expected marginal cost equals the probability of detection p times the evaded tax plus penalty, 1 + θ. The presence of the elasticity ε in the formula reflects that the taxpayer who evades an extra dollar incurs a higher probability of detection on all the inframarginal units of evasion.

Interestingly, this simple model is consistent with less than full tax evasion even under a zero penalty, θ = 0. In this case, partial evasion may be better than full evasion because it involves a lower probability of being detected and having to pay the full statutory tax (but no penalty). The comparative statics of such a model have been analyzed in the literature (e.g., Yitzhaki (1987)). A higher penalty and a positive shift of the detection probability are both associated with lower tax evasion. Moreover, as can be seen directly from (3), the marginal tax rate has no impact on tax evasion. This result relies on the assumptions of risk neutrality, linear taxation, and a linear penalty in evaded tax. In particular, the combination of a linear penalty and linear taxation implies that the substitution effect of the marginal tax rate is zero, while risk neutrality implies that the income effect is also zero. Under a nonlinear penalty, the marginal tax rate will have a nonzero substitution effect with the sign of the effect depending on the second-order derivative of the fine. Moreover, in a nonlinear tax system, an increase in the marginal tax rate for a constant total tax liability can have a positive substitution effect on evasion, although this is true only under an endogenous audit probability and the result depends on the second-order derivative of the audit probability. In general, the substitution effect of the marginal tax rate on evasion is theoretically ambiguous and its sign is an open empirical question.

The strongest critique of the economic model of tax evasion centers on its predictions of the level of noncompliance. In our model, the taxpayer should increase evasion as long as the left-hand side of equation (3) is below 1. The fact that the observed p and θ are close to zero is often argued to imply that it is privately optimal for taxpayers to increase evasion and that they are, therefore, complying too much from the perspective of the economic model. This reasoning ignores the role of ε(e), and this is particularly important in a tax system using third-party information reporting. As we will now argue, the presence of third-party reporting puts a specific structure on the functions p(e) and ε(e).
Third-party reporting can be embedded in the model in the following way. Let true income be given by ȳ = ȳt + ȳs, where ȳt is subject to third-party reporting (wages and salaries, interest income, mortgage payments, etc.) and ȳs is self-reported (self-employment income, various deductions, etc.). For third-party reported income, assuming there is no collusion between the taxpayer and the third party, the probability of detection is close to 1 as systematic matching of tax returns and information reports will uncover any evasion.7 By contrast, the detection probability for self-reported income is very low because there is no smoking gun for tax evasion and tax administrations have limited resources to carry out blind audits.

7 Kleven, Kreiner, and Saez (2009) studied the issue of collusion and third-party reporting in detail, and demonstrated that collusion cannot be sustained in large formal firms even with low audit rates and penalties.

Based on these observations, it is natural to assume that the probability of detection p(e) is very low for e < ȳs, very high for e > ȳs, and increases rapidly around e = ȳs. Notice that these properties rely on a specific sequence of underdeclaration: as tax evasion goes from 0 to ȳ, the taxpayer first evades taxes on income items with a low detection probability and then evades taxes on items with a high detection probability. Given that the tax rate and penalty are the same across different income items, this is the optimal sequence for the taxpayer. This implies that the detection probability has an S shape like the one shown in Figure 1, where p(e) is initially very close to 0 and then increases rapidly toward 1 around the threshold ȳs.8

8 A microfoundation of the S shape in the figure would allow for many income items, some of which are third-party reported and some of which are self-reported. In general, let there be N third-party reported items with true incomes ȳt1, . . . , ȳtN, and let there be M self-reported items with true incomes ȳs1, . . . , ȳsM. The N third-party reported items have higher detection probabilities than the M self-reported items, but there is heterogeneity in the probability across items in each group. As argued above, an optimizing taxpayer choosing total tax evasion e will underdeclare income items sequentially such that the detection probability is increasing in total evasion. In this case, it is natural to assume that the detection probability has a shape like the one shown in Figure 1.

FIGURE 1.—Probability of detection under third-party reporting.
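To make the resulting equilibrium concrete before stating it, the following numerical sketch maximizes expected income (1) under a logistic detection probability with the S shape just described. This is purely an illustration, not the authors' code: the logistic functional form and all parameter values are assumptions chosen only to mimic Figure 1.

```python
import numpy as np

# Illustrative parameters: assumptions for this sketch, not estimates from the paper
y_bar = 200_000.0       # true income
y_s = 20_000.0          # part of income not covered by third-party reports
tau, theta = 0.5, 0.0   # proportional tax rate; zero penalty, as in the text
k = 1.0 / 2_000.0       # steepness of the logistic around the threshold y_s

def p(e):
    """S-shaped detection probability: near 0 below y_s, near 1 above it."""
    return 1.0 / (1.0 + np.exp(-k * (e - y_s)))

def expected_income(e):
    """Equation (1): expected net-of-tax income at evasion level e."""
    return (1 - p(e)) * (y_bar * (1 - tau) + tau * e) \
        + p(e) * (y_bar * (1 - tau) - theta * tau * e)

e_grid = np.linspace(0.0, y_bar, 200_001)            # 1-krone steps
e_star = e_grid[np.argmax(expected_income(e_grid))]
print(f"optimal evasion e* = {e_star:,.0f} (threshold y_s = {y_s:,.0f})")
print(f"p(e*) = {p(e_star):.3f}  vs.  1/(1 + theta) = {1.0 / (1.0 + theta):.3f}")
# Even with a zero penalty, the optimum stays below y_s: evading past the
# threshold raises the detection risk on all inframarginal units of evasion.
```

With these numbers the optimum lands below the threshold ȳs and the detection probability at the optimum is far below 1/(1 + θ), which is exactly the configuration described next.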
In this model, the taxpayer's optimum will be at a point to the left of ȳs, as shown in the figure. At this equilibrium, p(e) is much lower than 1/(1 + θ), but the elasticity ε(e) is very high as evasion is close to the level where third-party reporting starts. The taxpayer almost fully underdeclares self-reported income, while fully declaring third-party reported income.

It is useful to briefly consider heterogeneous taxpayers as this will play a role in the empirical analysis. There is heterogeneity in the share of income that is third-party reported depending on self-employment, job type, wealth composition, and so forth. Hence, the threshold at ȳs in Figure 1 varies across taxpayers for a given ȳ. While the arguments above imply that tax evasion should always be close to ȳs, in practice, taxpayers who derive most of their income in self-reported form cannot easily evade all their self-reported income. This is because total reported income after tax needs to be roughly consistent with consumption and change in wealth, which can be partially ascertained by the government using information from financial institutions, credit card records, and so forth. This can be seen as additional third-party information that can be obtained by the tax authorities if total disposable income appears unrealistically low.9 This information matters for those with mostly self-reported income (e.g., self-employed individuals), but not for those with mostly third-party income (e.g., wage earners with small additional amounts of self-reported income). This leads to the prediction that those with little self-reported income should almost fully evade self-reported income, while those with substantial self-reported income should evade less as a share of self-reported income (but evade more in total).

9 As we describe in Section 3, tax audits do indeed compare disposable reported income to estimates of consumption and wealth changes using information from banks and other financial institutions.

Besides these predictions about the level of tax evasion across different income items and taxpayers, the model also predicts that the deterrence effect of enforcement will depend on the information environment. The deterrence effect for self-reported income should be significant and consistent with the standard comparative statics discussed above, whereas there should be no effect on third-party income. In the following sections, we present a comprehensive test of the model predictions with respect to compliance levels and deterrence effects under different information environments.

3. CONTEXT, EXPERIMENTAL DESIGN, AND DATA

3.1. The Danish Income Tax and Enforcement System

The Danish income tax system is described in Table I. Panel A describes the different tax bases and panel B describes the tax rate structure. The system combines national and local taxes that are enforced and administered in an integrated system. Labor income first faces a national payroll tax imposed at a flat rate of 8%. This tax is deducted when computing all other taxes, so
that the effective labor income tax equals the payroll tax plus 92% of the other taxes. The national income tax is a progressive three-bracket system imposed on a tax base equal to personal income (labor income, transfers, pensions, and other adjustments) plus net capital income (if it is positive) with marginal tax rates equal to 5.5%, 11.5%, and 26.5%. The local income tax is imposed on taxable income (personal income plus net capital income minus deductions) above a standard exemption at a flat rate that varies by municipality and is equal to 32.6% on average.10 Finally, at the national level, stock income (dividends and capital gains) is taxed separately by a progressive two-bracket system with rates equal to 28% and 43%.

10 There is a ceiling on the combined local and national marginal tax rate of 59%. This ceiling is binding in the average municipality as 32.6% + 26.5% = 59.1%. Hence, in the average municipality, the top marginal tax rate on labor income (including the payroll tax) is equal to 8% + 0.92 · 59% = 62.3%. This is among the highest marginal tax rates in the world.

About 88% of the Danish population is liable to pay income tax, and all tax liable individuals are required to file a return.11 Income tax filing occurs in the spring of year t + 1 for income earned in year t. By the end of January in year t + 1, SKAT will have received most information reports from third parties. Based on the third-party reports, SKAT constructs prepopulated tax returns that are sent to taxpayers in mid-March. Other than third-party information, the prepopulated return may contain additional hard information that SKAT possesses such as an estimated commuting allowance based on knowledge of the taxpayer's residence and work addresses.12 Upon receiving the prepopulated return, the taxpayer has the option to make adjustments and submit a final return before May 1.13 This filing system implies that, for most tax filers, the difference between income items on the final return and the prepopulated return is a measure of item-by-item self-reported income.

11 The group of citizens who are not tax liable and therefore not required to file a return consists mostly of children under the age of 16 who have not received any taxable income over the year.

12 Denmark, as the first country in the world, introduced prepopulated returns in 1988; this policy has since been introduced in several other European and South American countries.

13 New returns can be submitted by phone, internet, or mail. The taxpayer may keep filing new returns all the way up to the deadline; only the last return counts. If no adjustments are made, the prepopulated return counts as the final return.

After each tax return has been filed, audit flags are generated based on the characteristics of the return. Audit flags do not involve any randomness, but are a deterministic function of the computerized tax information available to SKAT. Flagged returns are looked at by a tax examiner, who decides whether or not to instigate an audit based on the severity of flags, local knowledge, and resources. The audit-flag rate for the entire population of individual tax filers is 4.2%. Audits may generate adjustments to the final return and a tax correction. In the case of underreporting, the taxpayer has the option to pay taxes owed immediately or to postpone the payment at interest. If the underreporting is seen as deliberate cheating, a fine may be imposed.
TABLE I
DANISH INDIVIDUAL INCOME TAX IN 2006
A. Income Concepts

1. Labor income: Salary, wages, honoraria, fees, bonuses, fringe benefits, business earnings.
2. Personal income: Labor income (1) + social transfers, grants, awards, gifts, received alimony − payroll tax and certain pension contributions.
3. Capital income: Interest income, rental income, business capital income − interest on debt (mortgage, bank loans, credit cards, student loans).
4. Deductions: Commuting costs, union fees, unemployment contributions, other work related expenditures, charitable contributions, alimony paid.
5. Taxable income: Personal income (2) + capital income (3) − deductions (4).
6. Stock income: Dividends and realized capital gains from corporate stock.

B. Tax Rates and Tax Bases

Tax Type^a             Tax Base                                    Bracket (DKK)^b      Tax Rate
Payroll tax            Labor income                                All income           8.0%
National income tax    Personal income + max(capital income, 0)    38,500–265,500       5.5%
                                                                   265,500–318,700      11.5%
                                                                   318,700–             26.5%^c
Regional income tax    Taxable income                              38,500–              32.6%^d
Stock income tax       Stock income                                0–44,400             28.0%
                                                                   44,400–              43.0%

^a The national and regional income taxes are based on individual income (not family income). The stock income tax is based on family income with brackets for married tax filers twice as large as those reported in the table.
^b All amounts are given in Danish kroner: U.S. $1 = 5.2 DKK as of January 2010.
^c The top rate is reduced so that the combined national and regional income top marginal tax rate never exceeds 59%. The top marginal tax rate on labor income including the payroll tax is therefore 0.08 + 0.92 · 0.59 = 62.3%.
^d The regional tax includes municipal and county taxes in 2006. The rate shown is the average across all municipalities, and includes the optional church tax equal to 0.7%.
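The rate arithmetic in footnote c (and footnote 10 in the text) can be reproduced directly from panel B; the following is a minimal sketch of that calculation only, not a full Danish tax calculator:

```python
# Rates from Table I, panel B (2006)
payroll = 0.08          # flat payroll tax on labor income
national_top = 0.265    # top national income tax bracket
regional = 0.326        # average regional rate (incl. the 0.7% church tax)

# The 59% ceiling binds in the average municipality: 0.265 + 0.326 = 0.591
combined = min(national_top + regional, 0.59)
# The payroll tax is deducted before the other taxes apply (footnote c)
top_mtr = payroll + (1 - payroll) * combined
print(f"top marginal tax rate on labor income: {top_mtr:.1%}")  # -> 62.3%
```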
In practice, fines are rare because it is difficult to draw the line between honest mistakes and deliberate fraud. An audit may alternatively find overreporting, in which case excess taxes are repaid with interest.

3.2. Experimental Design

The experiment is based on a stratified random sample of 25,020 employees and 17,764 self-employed.14

14 The "employee" category includes transfer recipients such as retired and unemployed individuals, and would therefore be more accurately described as "not self-employed."

The sample of employees was stratified by tax
return complexity, with an over-sampling of filers with high-complexity returns.15

15 An additional stratification ensured that the same number of taxpayers was selected from each of the regional tax collection agencies located around the country.

The experimental treatments and their timing are shown in Figure 2. The experiment was implemented by SKAT in two stages during the filing and auditing seasons of 2007 and 2008. In the first stage, taxpayers were randomly assigned to a 0% audit group or a 100% audit group. In the 0% audit group, taxpayers were never audited even when the characteristics of the return would normally have triggered an audit. In the 100% audit group, all taxpayers were subject to unannounced tax audits of tax returns filed in 2007 (for 2006 income), meaning that taxpayers were unaware at the time of filing that they had been selected for an audit.16

16 The actual audit rate in the 100% audit group was slightly lower than 100%, because some tax returns were impossible to audit due to special circumstances (individuals dying, disappearing, leaving the country, filing with substantial delay, etc.). The actual audit rates were 98.7% for employees and 92% for self-employed individuals. All of our estimates are based on the full 100% audit sample, so that we are measuring intent-to-treat effects rather than treatment effects. We prefer to present intent-to-treat effects rather than treatment effects (which would be obtained by running a two-stage least squares (2SLS) regression on the actual audit and using the intent-to-audit group as an instrument), because the impossibility of auditing some returns reflects relevant real-world limitations.

FIGURE 2.—Overview of experimental design.

The tax audits in the 100% audit group were comprehensive and examined every item on the tax return using various verification procedures. Some items were checked by matching the return to administrative register data (e.g., deductions for paid alimony can be matched to received alimony of the ex-spouse,
commuting deductions can be verified from information about the residence and work addresses). Other items required SKAT to request supporting documentation from the taxpayer, including self-reported deductions that cannot be double-checked in administrative registers and capital gains/losses from stock based on self-reported buying and selling prices. For some items such as taxable fringe benefits that are not third-party reported, SKAT would sometimes match self-reported income with the accounting books of the employer. Finally, in addition to these item-by-item verification procedures, SKAT compared disposable reported income to estimates of consumption and the change in wealth over the tax year, drawing on information from financial institutions, credit cards, and so forth. In the case of detected misreporting, the tax liability was corrected and a penalty possibly imposed depending on the nature of the error and as appropriate according to Danish law. Importantly, audited taxpayers were not told that the audits were part of a special study.

The cost of implementing the experimental audits equaled 21% of SKAT's total annual audit resources. Despite the large amount of resources spent on these audits, they are unlikely to uncover all tax evasion for all taxpayers and our results therefore provide lower bounds on total evasion.17 The same issue arises in the TCMP studies, which blow up detected tax evasion by a multiplier of 3.28 to arrive at the official U.S. tax evasion estimates. Unfortunately, this multiplier is large and has a very large measurement error, so that total evasion rates are at best rough approximations.18 In this study, we therefore focus solely on detectable tax evasion.

17 Income that is likely to go undetected includes labor income from the informal economy, in-kind exchanges among professionals, foreign income from jurisdictions with bank secrecy laws, and some fringe benefits not subject to third-party reporting.

18 The multiplier of 3.28 is based on a TCMP direct survey of taxpayers from 1976 (see Internal Revenue Service (1996) for details). Obviously, such self-reported levels of tax evasion are likely to be very noisy.

The first stage of the experiment is used for two purposes. First, audit data for the 100% audit group are used to study the anatomy of compliance in the baseline. We also combine baseline audit data with quasi-experimental variation in marginal tax rates to study the effect of tax policy on compliance. Second, the random assignment of taxpayers to the 100% and 0% audit groups is used to estimate the causal effect of audits on future reporting behavior.

In the second stage, individuals in both the 0% and 100% audit groups were randomly selected for pre-announced tax audits of tax returns filed in 2008 (for 2007 income). This part of the experiment was implemented only for the employees, since it was administratively infeasible for SKAT to include the self-employed. The pre-announcements were made by official letters from SKAT sent to taxpayers 1 month prior to the filing deadline on May 1, 2008.19

19 Recall that prepopulated returns are created around mid-March, after which taxpayers can file their tax return. When the pre-announcement letters were delivered, 17% of those taxpayers had already filed a new return. However, as explained in the previous section, taxpayers are allowed to change their returns all the way up to the deadline; only the final report is considered by tax examiners.
A third of the employees in each group received a letter telling them that their return would certainly be audited, another third received a letter telling them that half of everyone in their group would be audited, and the final third received no letter. The second stage therefore creates exogenous variation in the probability of being audited, conditional on having been audited in the first stage or not. The audit probability is 100% for the first group, 50% for the second group, and equal to the current perceived probability in the third group.

The wording of the threat-of-audit letters was designed to make the message simple and salient. The wording of the 100% letter (50% letter, respectively) was the following: "As part of the effort to ensure a more effective and fair tax collection, SKAT has selected a group of taxpayers—including you—for a special investigation. For (half the) taxpayers in this group, the upcoming tax return for 2007 will be subject to a special tax audit after May 1, 2008. Hence, (there is a probability of 50% that) your return for 2007 will be closely investigated. If errors or omissions are found, you will be contacted by SKAT." Both types of letters included an additional paragraph saying that "As always, you have the possibility of changing or adding items on your return until May 1, 2008. This possibility applies even if you have already made adjustments to your return at this point."

After returns had been filed in 2008, SKAT audited all taxpayers in the 100%-letter group and half of all taxpayers (selected randomly) in the 50%-letter group. However, to save on resources, these audits were much less rigorous than the first round of audits in 2007. Hence, we do not show results from the actual audits in 2008, but focus instead on the variation in audit probabilities created by the letters.

Let us briefly consider the possibility of spillover effects between treatments and controls. For several reasons, this is not likely to be a central issue here. First, there was no media coverage of the experiment and, therefore, no general public awareness about it. Second, audited taxpayers were not aware that the audits were part of an experiment; only letter recipients were aware of an experimental treatment. Third, information about income tax filing and auditing is strictly private, and hence spillovers can arise only if a treated individual voluntarily decides to reveal this information to others. This limits the issue primarily to close relatives such as spouses. Given a sample of 42,784 individuals spread across a country of about 5.5 million people, there are bound to be very few close family members in the sample. The potential importance of spillover effects within families can actually be checked by linking individuals in the sample to their spouses and cohabitating partners. We have carried out robustness checks where we drop all individuals in the sample whose partner is also in the sample (456 observations, or 1.07% of the sample).
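The full assignment protocol can be summarized in a short simulation. This is a schematic reconstruction for exposition only; the uniform 50/50 and one-third splits abstract from the stratification described above, and none of this is SKAT's actual code:

```python
import random
random.seed(42)

# 17,764 self-employed and 25,020 employees, as in Section 3.2
population = [{"id": i, "self_employed": i < 17_764} for i in range(42_784)]

for t in population:
    # Stage 1 (2007): unannounced audit of the 2006 return, or guaranteed no audit
    t["audited_2007"] = random.random() < 0.5
    # Stage 2 (2008): threat-of-audit letters, employees only, cross-randomized
    if t["self_employed"]:
        t["letter"] = None   # administratively infeasible for the self-employed
    else:
        t["letter"] = random.choice(["100%", "50%", "none"])
    # Announced audits after filing: all of the 100%-letter group,
    # a random half of the 50%-letter group
    t["audited_2008"] = (t["letter"] == "100%" or
                         (t["letter"] == "50%" and random.random() < 0.5))
```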
Dropping these observations has no impact on any of the empirical results.20 We therefore conclude that spillover effects are not a key concern for this experiment.

3.3. Data

The data are obtained from SKAT's Business Object Database, which contains all information available to SKAT for each taxpayer. This includes all income items from the third-party reports and the prepopulated, filed, and audited tax returns for each year and taxpayer. For the 2007 and 2008 filing seasons (2006 and 2007 incomes, respectively), we extract item-by-item income data from the third-party information reports (I), prepopulated return (P), filed return (F), and after-audit return (A). We also extract information about audit flags (described above) and historical audit adjustments. Finally, the database contains a number of socioeconomic variables such as age, gender, marital status, church membership, home ownership, residence, and characteristics of the taxpayer's employer (sector, number of employees).

4. THE ANATOMY OF TAX COMPLIANCE

4.1. Overall Compliance

This section analyzes data from the baseline audits of tax returns filed in 2007 for incomes earned in 2006 in the 100% audit group. Table II presents audit statistics for total reported income in part A, and for third-party and self-reported income separately in part B. Starting with total net income and total tax liability in the top rows of the table, statistics are then presented by specific income categories in lower rows. For each income category, part A shows pre-audit income (column 1), total audit adjustment (column 2), audit adjustment due to underreporting (column 3), and audit adjustment due to overreporting (column 4). Each column shows average amounts in Danish kroner as well as percent of tax filers with nonzero amounts; standard errors are displayed in parentheses. All statistics are calculated using population weights to reflect averages in the full population of tax filers in Denmark.

Average net income before audits is 206,038 kroner (about $40,000), and average tax liability is 69,940 kroner, corresponding to an average tax rate of 34%. The most important income component is personal income, which includes earnings, transfers, pensions, and various adjustments.21 Personal income is reported by 95% of tax filers, and the average amount is close to total
20 The subsample where both spouses are present in the experiment is too small to reliably estimate spillovers.

21 See Table I for a detailed definition. In all tables, the personal income variable includes only earnings of employees, while earnings of the self-employed are reported separately as part of self-employment income.
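Given the item-by-item extracts (I, P, F, A) described in Section 3.3, the two derived quantities used throughout the analysis are simple differences, in line with the filing-system discussion in Section 3.1. The following is a sketch with hypothetical column names and made-up numbers; the actual database schema is not documented at this level of detail:

```python
import pandas as pd

# One row per taxpayer-item; P, F, A are the prepopulated, filed, and
# after-audit amounts (hypothetical values for illustration)
items = pd.DataFrame({
    "taxpayer": [1, 1, 2],
    "item":     ["personal", "deductions", "personal"],
    "P":        [210_000.0, -9_000.0, 150_000.0],
    "F":        [212_000.0, -11_000.0, 150_000.0],
    "A":        [212_000.0, -9_500.0, 152_000.0],
})

# Self-reported income: the filer's adjustments to the prepopulated return
items["self_reported"] = items["F"] - items["P"]
# Audit adjustment: what the tax examiner changes on the filed return
items["audit_adjustment"] = items["A"] - items["F"]
print(items)
```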
TABLE II
AUDIT ADJUSTMENTS DECOMPOSITION^a

Column key. Part A, Total Income Reported: (1) Pre-Audit Income; (2) Audit Adjustment; (3) Underreporting; (4) Overreporting. Part B, Third-Party vs. Self-Reported Income: (5) Third-Party Income; (6) Third-Party Underreporting; (7) Self-Reported Income; (8) Self-Reported Underreporting. For each income category, the "Amounts" row gives the average amount in kroner (standard error) and the "% Nonzero" row gives the percent of filers with a nonzero amount (standard error).

I. Net Income and Total Tax

Net income
Amounts: (1) 206,038 (2159) | (2) 4532 (494) | (3) 4796 (493) | (4) −264 (31) | (5) 195,969 (1798) | (6) 612 (77) | (7) 10,069 (1380) | (8) 4183 (486)
% Nonzero: (1) 98.38 (0.09) | (2) 10.74 (0.22) | (3) 8.58 (0.20) | (4) 2.16 (0.10) | (5) 98.57 (0.08) | (6) 2.31 (0.11) | (7) 38.18 (0.35) | (8) 7.39 (0.19)

Total tax
Amounts: (1) 69,940 (1142) | (2) 1980 (236) | (3) 2071 (235) | (4) −91 (11)
% Nonzero: (1) 90.76 (0.21) | (2) 10.59 (0.22) | (3) 8.41 (0.20) | (4) 2.18 (0.10)

II. Positive and Negative Income

Positive income
Amounts: (1) 243,984 (2511) | (2) 3776 (485) | (3) 3943 (485) | (4) −167 (27) | (5) 223,882 (1860) | (6) 516 (76) | (7) 20,102 (1693) | (8) 3427 (478)
% Nonzero: (1) 98.24 (0.09) | (2) 5.80 (0.17) | (3) 4.78 (0.15) | (4) 1.02 (0.07) | (5) 98.15 (0.10) | (6) 1.60 (0.09) | (7) 19.53 (0.28) | (8) 3.41 (0.13)

Negative income
Amounts: (1) −37,946 (1014) | (2) 756 (71) | (3) 853 (69) | (4) −97 (14) | (5) −27,913 (406) | (6) 97 (12) | (7) −10,033 (862) | (8) 756 (68)
% Nonzero: (1) 79.09 (0.29) | (2) 6.45 (0.18) | (3) 5.13 (0.16) | (4) 1.32 (0.08) | (5) 78.21 (0.29) | (6) 0.75 (0.06) | (7) 29.49 (0.33) | (8) 4.99 (0.16)

III. Income Components

Personal income
Amounts: (1) 210,178 (1481) | (2) 2327 (399) | (3) 2398 (399) | (4) −71 (11) | (5) 211,244 (1385) | (6) 463 (74) | (7) −1066 (548) | (8) 1936 (392)
% Nonzero: (1) 95.22 (0.15) | (2) 2.49 (0.11) | (3) 1.99 (0.10) | (4) 0.50 (0.05) | (5) 95.20 (0.15) | (6) 1.30 (0.08) | (7) 11.95 (0.23) | (8) 0.82 (0.06)

Capital income
Amounts: (1) −11,075 (340) | (2) 254 (49) | (3) 286 (49) | (4) −32 (6) | (5) −14,556 (602) | (6) 98 (11) | (7) 3481 (542) | (8) 188 (47)
% Nonzero: (1) 93.93 (0.17) | (2) 2.10 (0.10) | (3) 1.69 (0.09) | (4) 0.41 (0.05) | (5) 94.91 (0.16) | (6) 0.79 (0.06) | (7) 12.29 (0.23) | (8) 1.28 (0.08)

Deductions
Amounts: (1) −9098 (104) | (2) 148 (17) | (3) 197 (15) | (4) −49 (7) | (5) −5666 (48) | (6) 18 (3) | (7) −3432 (85) | (8) 179 (15)
% Nonzero: (1) 60.07 (0.35) | (2) 3.45 (0.13) | (3) 2.56 (0.11) | (4) 0.89 (0.07) | (5) 57.61 (0.35) | (6) 0.31 (0.04) | (7) 22.60 (0.30) | (8) 2.49 (0.11)

Stock income
Amounts: (1) 5635 (1405) | (2) 259 (45) | (3) 281 (45) | (4) −22 (8) | (5) 3783 (976) | (6) 30 (12) | (7) 1852 (943) | (8) 251 (43)
% Nonzero: (1) 22.47 (0.30) | (2) 0.95 (0.07) | (3) 0.80 (0.06) | (4) 0.15 (0.03) | (5) 22.44 (0.30) | (6) 0.07 (0.02) | (7) 2.45 (0.11) | (8) 0.75 (0.06)

Self-employment income
Amounts: (1) 10,398 (812) | (2) 1544 (280) | (3) 1633 (279) | (4) −89 (26) | (5) 1164 (177) | (6) 4 (2) | (7) 9234 (816) | (8) 1630 (279)
% Nonzero: (1) 7.63 (0.19) | (2) 3.43 (0.13) | (3) 3.02 (0.12) | (4) 0.41 (0.05) | (5) 1.40 (0.08) | (6) 0.04 (0.01) | (7) 7.66 (0.19) | (8) 3.00 (0.12)

^a All amounts are in Danish kroner (U.S. $1 = 5.2 DKK as of 1/2010) and negative amounts (such as deductions) are reported as negative. Column 1 reports pre-audit amounts and the percent of filers with nonzero pre-audit amounts. Column 2 displays the net audit adjustment (and the percent with a nonzero net audit adjustment); column 3 displays underreporting in the audit adjustment, defined as upward audit adjustments increasing tax liability (and the percent with underreporting); column 4 displays overreporting in the audit adjustment, defined as downward audit adjustments decreasing tax liability (and the percent with overreporting). Note that column 3 + column 4 = column 2. Column 5 displays third-party income (and the percent with nonzero third-party income); column 6 displays third-party income underreporting, defined as upward audit adjustments in the case where third-party income is higher than final reported income for positive income items (and the percent with third-party income underreporting); column 7 displays self-reported income, defined as total reported income minus third-party reported income (and the percent with nonzero self-reported income); column 8 displays self-reported income underreporting, defined as all upward audit adjustments net of third-party income underreporting (and the percent with self-reported income underreporting). Note that column 5 + column 7 = column 1 and column 6 + column 8 = column 3. Panel I reports net income (the sum of all positive income components minus all negative income components and other deductions) and total tax. Panel II reports positive income (the sum of all positive income components) and negative income (the sum of all negative income components and deductions). Panel III displays various income components. Personal income is earnings, pensions, and alimony minus some retirement contributions. Capital income is interest income, returns on bonds, and net rents minus all interest payments. Deductions include work-related expenses, union fees, charitable contributions, alimony paid, and various smaller items. Stock income includes dividends and realized capital gains on stocks. Self-employment income is net profits from unincorporated businesses. Net income is personal income, capital income, stock income, and self-employment income minus deductions. All estimates are population weighted and based solely on the 100% audit group (19,680 observations). Standard errors are reported in parentheses.
Capital income is negative on average, mainly due to mortgage interest payments. It is equal to about −5% of total net income and is reported by 94% of tax filers.22 Deductions also represent about −5% of net income, but only 60% of tax filers claim deductions. Stock income constitutes less than 3% of net income and is reported by 22% of tax filers. Self-employment income is about 5% of net income and is reported by 8% of tax filers.

Each income category is itself a sum of several line items on the tax return. A given line item is either always positive (such as interest income received) or always negative (such as mortgage interest payments). As we shall see, the distinction between positive line items and negative line items matters for separately measuring underreporting of third-party and self-reported income. We therefore split total net income into “positive income” and “negative income,” defined as the sum totals of all the positive and negative items, respectively.

Column 2 shows that the adjustment amounts are positive for all categories, implying that taxpayers do indeed evade taxes.23 These adjustments are strongly statistically significant in all cases. Total detectable tax evasion can be measured by the adjustment of net income and is equal to 4532 kroner (about $900), corresponding to about 2.2% of net income. The tax lost through detectable tax evasion is 1980 kroner, or 2.8% of total tax liability.24 Considering the positive and negative income items separately, the evasion rate is 1.6% for positive income and 1.9% for negative income (in absolute value). Hence, overall tax evasion appears to be very small in Denmark despite the high marginal tax rates described in the previous section. However, the low evasion rates overall mask substantial heterogeneity across different income components, with evasion rates equal to 1.1% for personal income, 2.3% for capital income (in absolute value), 1.6% for deductions (in absolute value), 4.6% for stock income, and 14.9% for self-employment income. We explore the reasons for this heterogeneity below. We may also consider evasion rates measured by the share of taxpayers evading (i.e., the percent nonzero in column 2 relative to column 1).

22 Nonzero capital income is extremely common as most taxpayers have either negative capital income from various loans or positive capital income from bank interest (most Danish bank accounts pay interest).
23 For negative items (such as mortgage interest payments included in capital income), a positive adjustment means that the absolute value of the mortgage interest payment was reduced. We use this convention so that upward adjustments always mean higher net income and hence a higher tax liability.
24 Estimated underreporting from the 1992 TCMP study for the U.S. individual income tax is 13.2% of total tax liability (Internal Revenue Service (1996)). However, as discussed above, this estimate is obtained by applying a multiplier of 3.28 to detected underreporting. Hence, detected evasion in the United States is about 4%, higher than the 2.8% we find for Denmark but not overwhelmingly so.
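The component evasion rates quoted above can be reproduced directly from the rounded Table II entries (net audit adjustment in column 2 over pre-audit income in column 1, in absolute value). Because the table entries are rounded, the last digit can differ slightly from the figures in the text.

    # (audit adjustment, pre-audit income) in DKK, from Table II.
    pairs = {
        "net income":      (4532, 206_038),
        "total tax":       (1980, 69_940),
        "personal income": (2327, 210_178),
        "capital income":  (254, 11_075),
        "deductions":      (148, 9_098),
        "stock income":    (259, 5_635),
        "self-employment": (1544, 10_398),
    }
    for name, (adjustment, income) in pairs.items():
        print(f"{name}: {100 * adjustment / income:.1f}%")
    # Prints roughly 2.2, 2.8, 1.1, 2.3, 1.6, 4.6, and 14.8-14.9 percent.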
The overall evasion rate measured by the share of taxpayers having their net income adjusted is equal to 10.7%. For each income component separately, we have evasion rates of 2.6% for personal income, 2.2% for capital income, 5.7% for deductions, 4.2% for stock income, and 44.9% for self-employment income. These evasion rates are generally larger than for amounts, but follow the same qualitative pattern of heterogeneity.

The audit adjustments discussed so far reflect a combination of upward adjustments (underreporting) and downward adjustments (overreporting), which are reported separately in columns 3 and 4. We see that underreporting takes place in all income categories, and that the detected underreporting is always strongly significant. The heterogeneity across income categories follows the same pattern as for the total adjustment. The amounts of overreporting are always small but still statistically significant. The small amount of overreporting most likely reflects honest mistakes resulting from a complex tax code and the associated transaction costs of filing a tax return correctly.

4.2. Self-Reported versus Third-Party Reported Income

Each income category in Table II consists of some items that are self-reported and other items that are third-party reported. But the prevalence of information reporting varies substantially across income categories, with substantial third-party reporting for personal income at one end of the spectrum and very little third-party reporting for self-employment income at the other end. The results described above therefore suggest that evasion rates are higher when there is little third-party reporting, consistent with the findings of the TCMP studies in the United States. A key advantage of our data is that it allows an exact breakdown of income into third-party reported income and self-reported income for each income category and taxpayer, enabling a more rigorous analysis of the role of third-party reporting for tax compliance. We consider this breakdown in part B of Table II, which displays third-party income (column 5), underreporting of third-party income (column 6), self-reported income (column 7), and underreporting of self-reported income (column 8).

Columns 5 and 7 show that the use of third-party reporting is very pervasive in Denmark. Third-party reporting covers 95% of total net income, while self-reporting is responsible for only 5%. The share of third-party reporting in positive income is 92% and its share in negative income is 74%. While the widespread use of information reporting indicates that detection probabilities are very high on average, there is considerable heterogeneity across income components. For personal income, third-party reporting corresponds to more than 100% of total income, as self-reported income includes both positive and negative adjustments and is negative on average. Capital income reported by third parties is negative on average due to interest payments on debt, and is more than 100% of total negative capital income as self-reported capital income is positive. For the remaining components, the share of third-party reporting is 62% for deductions, 67% for stock income, and 11% for self-employment income. The fact that third-party reporting is not strictly zero for self-employed individuals is useful, because it allows an exploration of the separate implications of the information environment versus self-employment.25
We split total tax evasion into underreporting of self-reported income and underreporting of third-party reported income. As mentioned above, we observe line-by-line income amounts in the information report (I), the filed tax return (F), and the audit-adjusted return (A). Each report consists of line items that are either always positive (as in the case of earnings) or always negative (as in the case of deductions and losses). Consider first the always-positive line items. We say that there is underreporting of third-party income if the individual reports less on the return than what is obtained from third-party reports and there is a subsequent upward audit adjustment. Formally, if we have F < A < I, then third-party cheating is equal to A − F. If we have F < I ≤ A, then third-party cheating is equal to I − F. In all other cases (i.e., if either A ≤ F < I or F ≥ I), third-party cheating is zero. Given this procedure, we measure underreporting of self-reported income as the residual difference between total underreporting and third-party underreporting.

Consider next the always-negative line items such as losses and deductions. If the taxpayer reports larger losses or deductions (in absolute value) than what is obtained from third-party reports and is then denied part or all of those extra losses in the audit, this may reflect either self-reported losses that are unjustified or manipulation of third-party reported losses. Our prior methodology does not allow us to separate the two. However, closer examination of the data shows that negative income items either (a) are exclusively third-party reported items with no self-reported component or (b) have a significant self-reported income component. For negative items (a), underreporting has to be of the third-party category. It is reasonable to assume, consistent with our theoretical model, that for items (b) with a significant self-reported income component, underreporting is always in the self-reported category (as the detection probability is expected to be much lower for self-reported changes). We classify underreporting for negative items into self-reported and third-party components using this alternative methodology.

25 An example of third-party reporting for self-employed individuals would be an independent contractor working for a firm (but not as a formal employee) which reports the contractor’s compensation directly to the government.
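The classification rule for always-positive line items translates directly into code. The function below is a literal transcription of the rule just described, with F, A, and I as defined above; the function name and the worked example are ours.

    def split_underreporting(F, A, I):
        """Split the upward audit adjustment on a positive line item into
        third-party and self-reported underreporting."""
        total = max(A - F, 0.0)      # total detected underreporting
        if F < A < I:
            third_party = A - F
        elif F < I <= A:
            third_party = I - F
        else:                        # A <= F < I, or F >= I
            third_party = 0.0
        return third_party, total - third_party

    # Example: filed 100, audited up to 180, third parties reported 150.
    # Third-party underreporting is 150 - 100 = 50; the remaining 30 is
    # classified as self-reported underreporting.
    assert split_underreporting(100.0, 180.0, 150.0) == (50.0, 30.0)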
We find a very strong variation in tax evasion depending on the information environment. For third-party reported income, the evasion rate is always extremely small: it is equal to 0.23% for total positive income, 0.35% for total negative income, and always below 1% across all the different categories. Interestingly, the evasion rate for self-employment income conditional on third-party reporting is only 0.33%, suggesting that overall tax evasion among the self-employed is large because of the information environment and not because of, for example, different preferences among those choosing self-employment (such as attitudes toward risk and cheating). By contrast, tax evasion for self-reported income is substantial: the evasion rate is 17.1% for total positive income, 7.5% for total negative income, 5.4% for capital income, 13.6% for stock income, and 17.7% for self-employment income. The evasion rate for self-employment income is not particularly high compared to the other forms of income once we condition on self-reporting. For total self-reported net income, the tax evasion rate is equal to 41.6%. Because self-reported net income consists of positive amounts and negative amounts that just about cancel on average (self-reported net income is quite small), measuring tax evasion as a share of self-reported net income may give an exaggerated representation of the evasion rate. Note, however, that these estimates capture only detectable evasion and are therefore lower bounds on true evasion, particularly for self-reported income where traceable evidence is often limited.

The model presented earlier predicts that each taxpayer substantially underdeclares self-reported income while fully declaring third-party income. We can think of this as a “within-person” prediction. The cross-sectional evidence on evasion rates for third-party and self-reported income is consistent with this within-person prediction, but could also reflect a pattern where those with mostly self-reported income are large evaders and underdeclare any type of income, whereas those with mostly third-party income are nonevaders. In this case, big evaders would display substantial evasion even for third-party income, while nonevaders would report truthfully even for self-reported income. To explore this alternative hypothesis, we first point out two pieces of evidence in Table II that go against it. First, the evidence for self-employment income discussed above shows that self-employed individuals are major evaders overall, but do not underdeclare third-party income. Second, the population shares in Table II show that, among those who are found to evade taxes, only a small fraction underdeclare third-party income.

Figure 3 provides direct within-person evidence. Panel A depicts the distribution of the ratio of evaded income to self-reported income among those who evade. Income is defined as the sum of all positive items, so that self-reported income is always positive. The large spike around a ratio of 1 shows that, among evaders, the most common strategy is to evade all self-reported income. The figure also shows that almost no taxpayers evade more than their self-reported income. Panel B plots the fraction of taxpayers who evade and the fraction of income evaded against the fraction of income that is self-reported. The fraction of income evaded is shown for both total (positive) income and third-party (positive) income. Three findings in the figure support the within-person prediction of the model. First, the probability of evading jumps up immediately once the taxpayer has some income that is self-reported (although it never exceeds 40%). Second, the share of total income evaded is increasing in the share of income that is self-reported, whereas the share of third-party income evaded is always very close to zero. This shows that taxpayers with more self-reported income evade more, but always declare third-party income fully. Third, the share of total income evaded is very close to the 45-degree line as long as
FIGURE 3.—Anatomy of tax evasion. Panel A displays the density of the ratio of evaded income to self-reported income (after audit adjustment) among those with positive tax evasion, using the 100% audit group and population weights. Income is defined as the sum of all positive items (so that self-reported income is always positive). Panel A shows that, among evaders, the most common act is to evade all self-reported income. About 70% of taxpayers with positive self-reported income do not have any adjustment and are not represented on panel A. Panel B displays the fraction evading and the fraction evaded (conditional on evading) by deciles of the fraction of income self-reported (after audit adjustment and adding as one category those with no self-reported income). Panel B also displays the fraction of third-party income evaded (unconditional). Income is defined as positive income. In both panels, the sample is limited to those with positive income above 38,500 kroner, the tax liability threshold (see Table I).
self-reported income is less than 20% of total income, and then starts to fall below the 45-degree line. This shows that those with relatively little self-reported income evade more as a share of self-reported income than those with relatively high self-reported income, which goes directly against the alternative hypothesis above. This finding is consistent with the model in Section 2, where we argued that taxpayers who have a large share of income in self-reported form cannot evade all their self-reported income, because total disposable income cannot fall too far below the sum of consumption and the change in wealth without triggering an investigation. Although information about consumption and wealth is not automatically third-party reported, it can be (partially) obtained from third parties at the discretion of tax authorities.

To summarize these results, tax evasion is very low overall but substantial once we zoom in on purely self-reported income. This reflects an underlying pattern where each taxpayer fully declares third-party reported income (where detection probabilities are very high) and at the same time substantially underreports self-reported income (where detection probabilities are low). This is consistent with our model and suggests that overall tax compliance is high, not because taxpayers are unwilling to cheat, but because they are unable to cheat successfully due to the widespread use of third-party reporting.

4.3. Social versus Information Factors

To explore the role of social, economic, and information factors in determining evasion, Table III reports the results of ordinary least squares (OLS) regressions of a dummy for underreporting net income on a number of dummy covariates, using the full-audit group and population weights. Part A (columns 1–4) considers a basic set of explanatory variables, while part B (columns 5–8) considers a richer set of variables. Column 1 includes only social variables: gender, marital status, church membership, geographical location (dummy for living in the capital Copenhagen), and age (dummy for being older than 45). The table shows that being female, a church member, living in the capital, and older than 45 are negatively associated with evasion, while being married is positively associated with evasion. However, among these social variables, only gender is statistically significant. Column 2 adds three socioeconomic variables: home ownership, firm size (a dummy for working in a firm with less than 10 employees), and industrial sector (a dummy for working in the “informal sector” defined as agriculture, forestry, fishing, construction, and real estate).26 Being a homeowner, working in a small firm, and working in the informal sector are all positively and significantly associated with evasion. Column 3 considers information-related tax return factors, in particular the presence and size of self-reported income:

26 The informal sector classification is meant to capture industries that are generally prone to informal activities.
TABLE III
PROBABILITY OF UNDERREPORTING: SOCIOECONOMICS VERSUS TAX RETURN FACTORS^a

Column key. Part A, Basic Variables: (1) Social Factors; (2) Socioeconomic Factors; (3) Tax Return Factors; (4) All Factors. Part B, Detailed Variables: (5) Social Factors; (6) Socioeconomic Factors; (7) Tax Return Factors; (8) All Factors. Coefficients are in percent, with robust standard errors in parentheses; for groups of dummies in part B, the p-value (in percent) from an F-test of joint significance is reported instead.

Constant: (1) 12.72 (1.06) | (2) 10.13 (1.12) | (3) 1.18 (0.25) | (4) 3.72 (1.01) | (5) 6.95 (1.64) | (6) 5.55 (2.16) | (7) 0.95 (2.04) | (8) 2.24 (2.99)
Female dummy: (1) −5.56 (0.63) | (2) −4.17 (0.65) | (4) −2.06 (0.62) | (5) −5.29 (0.62) | (6) −3.33 (0.67) | (8) −1.02 (0.62)
Married dummy: (1) 1.22 (0.70) | (2) −0.55 (0.72) | (4) −1.50 (0.72) | (5) −0.72 (0.77) | (6) −1.98 (0.78) | (8) −1.70 (0.75)
Member of church: (1) −1.59 (0.98) | (2) −2.27 (0.97) | (4) −0.94 (0.92) | (5) −1.54 (1.02) | (6) −1.88 (0.99) | (8) −0.71 (0.92)
Geographical location [Copenhagen dummy in part A; 6 location dummies in part B]: (1) −1.49 (1.52) | (2) −0.01 (1.51) | (4) −0.25 (1.47) | (5) p = 6.86 | (6) p = 8.87 | (8) p = 33.53
Age [Age > 45 dummy in part A; 4 age-group dummies in part B]: (1) −0.72 (0.67) | (2) −0.63 (0.67) | (4) −0.56 (0.61) | (5) p = 0.00 | (6) p = 0.00 | (8) p = 24.33
Home ownership: (2) 5.49 (0.65) | (4) 0.15 (0.66) | (6) 3.72 (0.73) | (8) −0.88 (0.71)
Firm size [Firm size < 10 dummy in part A; 5 firm-size dummies in part B]: (2) 5.07 (1.26) | (4) 3.47 (1.05) | (6) p = 0.00 | (8) p = 0.00
Industrial sector [Informal sector dummy in part A; 22 industry dummies in part B]: (2) 4.37 (1.15) | (4) 0.27 (0.92) | (6) p = 0.00 | (8) p = 0.00
Self-reported income dummy: (3) 5.58 (0.75) | (4) 5.59 (0.80) | (7) 3.49 (0.80) | (8) 3.75 (0.78)
Self-reported income > 20,000 DKK: (3) 21.68 (1.38) | (4) 21.09 (1.40) | (7) 9.79 (1.62) | (8) 8.76 (1.61)
Self-reported income < −10,000 DKK: (3) 14.99 (1.42) | (4) 14.74 (1.42) | (7) 14.56 (1.41) | (8) 14.24 (1.38)
Auditing flag dummy: (3) 13.22 (1.58) | (4) 13.07 (1.53) | (7) 12.26 (1.61) | (8) 12.37 (1.56)
Self-employed dummy: (7) 17.03 (1.14) | (8) 13.47 (1.39)
Capital income dummy: (7) −0.75 (1.98) | (8) −0.47 (1.87)
Stock income dummy: (7) 0.33 (0.65) | (8) 1.21 (0.66)
Deduction dummy: (7) −1.12 (0.72) | (8) −0.76 (0.88)
Audit adjustment in 2004 or 2005 dummy: (7) 7.22 (1.55) | (8) 6.86 (1.58)
Income controls [6 income-group dummies]: (7) p = 0.20 | (8) p = 0.02
R-squared: (1) 1.16% | (2) 2.46% | (3) 16.15% | (4) 16.53% | (5) 2.16% | (6) 7.76% | (7) 18.72% | (8) 19.76%
Adjusted R-squared: (1) 1.14% | (2) 2.42% | (3) 16.14% | (4) 16.48% | (5) 2.11% | (6) 7.58% | (7) 18.66% | (8) 19.54%

^a This table reports coefficients of the OLS regression of a dummy for underreporting on various dummy regressors. All coefficients are expressed in percent and robust standard errors are reported in parentheses. Bottom rows report the R-squared and adjusted R-squared. All estimates are population weighted and based solely on the 100% audit group (19,680 observations). In part A (columns 1–4), we include a basic set of dummy variables, while a richer set of variables is included in part B (columns 5–8). In part B, we do not report the full set of coefficients for geographical, age, firm size, industrial sector, and income groups. We instead only report the p-value from an F-test that the coefficients of those dummies are all equal to zero (for each category). The six location dummies are defined as Copenhagen, North Sealand, Middle and South Sealand, South Denmark, Middle Jutland, and North Jutland. The four age dummies are for age groups 0–25, 26–45, 46–65, and 66+. The five firm size dummies are for firm sizes 1, 2–10, 11–100, 101–1000, and 1001+. The six income group dummies are for each of the bottom three quartiles separately, percentile 75–95, percentile 95–99, and the top percentile. For income categories, the self-employed dummy means nonzero self-employment income, and so forth.
a dummy for having nonzero self-reported income, a dummy for having self-reported income above 20,000 kroner, and a dummy for having self-reported income below −10,000 kroner. We also include a dummy for having been flagged by the automated audit selection system (see Section 3), because audit flags are to a large extent a (complex) function of self-reported income. The results show very strong effects of all these information-related variables.

Column 4 brings all the variables together so as to study their relative importance. The results show that by far the strongest predictors of evasion are the variables that capture self-reported income. The effect of firm size is also fairly strong, whereas the effect of the informal sector disappears.27 As for the social variables, their effects remain small, and all but female gender and marital status are statistically insignificant. Note that the coefficient on marital status actually changes sign.

It is illuminating to consider the adjusted R-squares across the different specifications. The specification including only self-reported income variables explains about 16.1% of the variation, while the specification with only socioeconomic factors explains just about 2.5%. Adding socioeconomic variables to the specification with tax return variables has almost no effect on the R-squares. This provides suggestive evidence that information, and specifically the presence and size of income that is difficult to trace, is the key aspect of the compliance decision.

In part B, we investigate whether these findings are robust to including a much richer set of explanatory variables. Besides the basic variables described above, we include 6 location dummies (for the 6 main regions of Denmark), 4 age-group dummies, 5 firm-size dummies, 22 industry dummies, 6 income-group dummies, dummies for having nonzero income in different categories, and a dummy for having experienced an audit adjustment in the past 2 years. The conclusions are the same as above: the effects of social variables are small and mostly insignificant, whereas variables that capture information (presence and size of self-reported income, self-employment, audit flags, and prior audit adjustments) have very strong effects. This confirms the conclusion that information and traceability are central to the compliance decision.
27 The fact that firm size remains significant suggests that collusion between taxpayers and third parties may be important in small firms, a finding which is consistent with the theoretical results of Kleven, Kreiner, and Saez (2009).
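The Table III estimates are weighted linear probability models. The following is a minimal sketch of such a regression with a few of the column 4 dummies; the data are synthetic placeholders, the variable names are ours, and the paper's actual weights and variable definitions are described in the table notes.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 19_680  # size of the 100% audit group
    cols = ["underreport", "female", "married", "homeowner",
            "small_firm", "selfrep_any", "audit_flag"]
    df = pd.DataFrame({c: rng.integers(0, 2, n) for c in cols})  # placeholder data

    res = smf.wls("underreport ~ female + married + homeowner"
                  " + small_firm + selfrep_any + audit_flag",
                  data=df, weights=np.ones(n)).fit(cov_type="HC1")
    print(res.params * 100)  # coefficients in percent, as in Table III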
5. THE EFFECT OF THE MARGINAL TAX RATE ON EVASION

The effect of marginal tax rates on tax evasion is a central parameter for tax policy design. As discussed earlier, the effect of the marginal tax rate on tax evasion is theoretically ambiguous, not just because of income effects, but because the substitution effect can be either positive or negative, depending on the structure of penalties, taxes, and detection probabilities. In this section, we sign the substitution effect by presenting evidence on the compensated elasticity of tax evasion with respect to the marginal tax rate.

Earlier studies of this parameter have been based on U.S. TCMP data and observational variation in marginal tax rates across taxpayers and over time (Clotfelter (1983), Feinstein (1991)). The results have been very sensitive to the empirical specification, due to the lack of exogenous variation in tax rates. We therefore follow a different approach using quasi-experimental variation created by the discontinuity in marginal tax rates around large and salient kinks in the Danish tax schedule.

As described in Section 3 and Table I, the Danish tax system consists of two separate piecewise linear schedules: a three-bracket income tax and a two-bracket stock income tax. The most significant kinks are created by the top-bracket threshold in the income tax (where the marginal tax rate jumps from 49% to 62%) and the bracket threshold in the stock income tax (where the marginal tax rate jumps from 28% to 43%). Economic theory predicts that taxpayers will respond to such jumps in marginal tax rates by bunching at the kink points. Saez (2010) showed that such bunching can be used to identify the compensated elasticity of reported income with respect to the net-of-tax rate. This strategy was pursued on Danish data by Chetty, Friedman, Olsen, and Pistaferri (2009), who found evidence of substantial bunching around the top kink in the income tax system. We also consider the top kink in the income tax, focusing on individuals with self-employment income, where evasion is substantial and a significant response is therefore more likely. Moreover, we consider the kink in the stock income tax, since this kink is also large and much of stock income is self-reported and therefore prone to evasion. Our key contribution to the existing literature is that the combination of pre-audit and post-audit data allows us to separately identify elasticities of illegal evasion and legal avoidance, as opposed to only the overall elasticity of reported income.

Figure 4 plots empirical distributions of taxable income (excluding stock income) in panel A and stock income in panel B around the major cutoffs in the income tax and stock income tax schedules. Panel A shows the distributions of pre-audit taxable income (solid curve) and post-audit taxable income (dashed curve) for the self-employed in 2006 around the top kink at 318,700 kroner (vertical line). The figure groups individuals into 3000 kroner bins and plots the number of taxpayers in each bin. Like Chetty et al. (2009), we find substantial bunching in pre-audit incomes around the kink, with almost five times as many taxpayers in the bin including the kink as in the surrounding bins. This provides clear evidence of an overall taxable income response to taxation, which may reflect evasion, avoidance, or real responses. To uncover the evasion response to marginal tax rates, we turn to the distribution of post-audit income. Here we continue to see bunching, but less than for pre-audit income. This suggests that bunching is achieved partly by underdeclaring income, which is consistent with an evasion response to the marginal tax rate.
The post-audit bunching reflects real and avoidance responses purged of the (detectable) evasion response.28 As shown in panel B, we find even stronger evidence of bunching around the kink point in the stock income tax schedule (at 88,600 kroner), with about 10 times as many taxpayers in the bin around the kink as in the surrounding bins. However, we see essentially no difference between the pre-audit and post-audit distributions, suggesting that the bunching effect reflects solely avoidance and not (detectable) evasion.

Table IV uses the bunching evidence to estimate elasticities of tax evasion and tax avoidance for self-employment income (panel A) and stock income (panel B). The first row in each panel shows the fraction of individuals bunching (defined as having an income within 1500 kroner of the kink) among individuals within 40,000 kroner of the kink. The second row in each panel shows compensated elasticities based on comparing the actual distribution to a counterfactual distribution estimated by excluding observations in a band around the kink (Saez (2010)). The difference between the actual and counterfactual distributions gives an estimate of excess mass around the kink point, which can be compared to the size of the jump in the net-of-tax rate so as to infer the elasticity. The identifying assumption is that, in the absence of the discontinuous jump in tax rates, there would have been no spike in the density distribution at the kink.

The estimated elasticity of pre-audit taxable income for the self-employed is equal to 0.16, while the elasticity of post-audit taxable income equals 0.085. The difference between the two is the compensated evasion elasticity with respect to the net-of-tax rate and is equal to 0.076. All of these estimates are strongly significant. For stock income, the pre-audit elasticity is 2.24 and strongly significant, while the post-audit elasticity is equal to 2.00. This implies an elasticity of evasion equal to 0.25, but this elasticity is not statistically significant. The last column of the table explores the robustness to the bandwidth around the kink used to estimate the elasticities. We find that the estimates are not very sensitive to bandwidth, because the bunching in the Danish tax data is very sharp.

To summarize these results, the marginal tax rate has at most a small positive substitution effect on tax evasion for individuals with substantial self-reported income. Estimated evasion responses are smaller than avoidance responses, although this decomposition could be biased by the presence of undetected evasion that the method attributes to avoidance. The combination of large evasion rates for self-reported income (as documented in the previous section) and small evasion effects of the marginal tax rate is not incompatible with the model in Section 2.
28 The post-audit bunching is a lower bound on real and avoidance responses, because individuals who respond to tax rates both along the avoidance/real margin and the evasion margin, and who bunch at the kink point (before audits), will be displaced from the kink by the audit. Hence, the difference in bunching between pre-audit and post-audit incomes is an upper bound on the evasion response to marginal tax rates.
FIGURE 4.—Density distributions around kink points. The figure displays the number of taxpayers (by 3000 DKK bins) for taxable income for the self-employed (panel A) and stock income (panel B). In both panels, we report the series for incomes before audits and incomes after audits for the 100% audit group. The vertical line denotes the kink point where marginal tax rates jump. The jump is from 49% to 62% in panel A (top taxable income bracket) and from 28% to 43% in panel B (top stock income bracket). For married filers, the stock income tax is assessed jointly, and the bracket threshold in the figure is the one that applies to such joint filers. For single filers, the bracket threshold is half as large at 44,300 kroner. We have aligned single and married filers in the figure by multiplying the stock income of singles by 2.
Importantly, the combined results of this and the previous section suggest that information reporting is much more important than low marginal tax rates for achieving enforcement.
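For readers who want to see the mechanics, the following is a stylized version of the bunching calculation described above, in the spirit of Saez (2010): fit a counterfactual density to the binned income distribution excluding a band around the kink, convert the excess mass at the kink into an implied earnings response, and divide by the log change in the net-of-tax rate. The polynomial counterfactual and all parameter choices here are simplifications of ours, not the paper's exact procedure.

    import numpy as np

    def bunching_elasticity(income, kink, t0, t1,
                            bin_width=3000, window=40_000,
                            exclude=1500, degree=5):
        """Compensated elasticity from excess mass at a kink (stylized)."""
        edges = np.arange(kink - window, kink + window + bin_width, bin_width)
        counts, _ = np.histogram(income, bins=edges)
        centers = (edges[:-1] + edges[1:]) / 2
        near = np.abs(centers - kink) <= exclude  # the bin(s) at the kink
        # Counterfactual bin counts: polynomial fitted off the kink band.
        coef = np.polyfit(centers[~near], counts[~near], degree)
        counterfactual = np.polyval(coef, centers)
        excess = counts[near].sum() - counterfactual[near].sum()
        dz = excess * bin_width / counterfactual[near].mean()  # income response
        return (dz / kink) / np.log((1 - t0) / (1 - t1))

    # For example, for the self-employed at the top income-tax kink:
    # bunching_elasticity(taxable_income, 318_700, t0=0.49, t1=0.62)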
TABLE IV
TAX EVASION VERSUS TAX AVOIDANCE ELASTICITIES^a

Column key: (1) Before-Audit Income (Avoidance + Evasion Elasticities); (2) After-Audit Income (Avoidance Elasticity Only); (3) Difference (Evasion Elasticity Only); (4) Robustness Check: Difference Using Smaller Sample Around Kink. Standard errors in parentheses.

A. Self-Employment Income (MTR jump from 49% to 62% at 318,700 DKK)
Fraction bunching (percent): (1) 19.12 (0.90) | (2) 12.56 (0.76) | (3) 6.56 (1.18) | (4) 9.57 (1.86)
Elasticity: (1) 0.161 (0.011) | (2) 0.085 (0.008) | (3) 0.076 (0.014) | (4) 0.070 (0.014)
Number of observations: (1) 1919 | (2) 1887 | (3) 3806 | (4) 2255

B. Stock Income (MTR jump from 28% to 43% at 88,600 DKK)
Fraction bunching (percent): (1) 39.30 (2.22) | (2) 36.42 (2.11) | (3) 2.88 (3.06) | (4) 1.80 (3.69)
Elasticity: (1) 2.243 (0.213) | (2) 1.996 (0.191) | (3) 0.247 (0.286) | (4) 0.120 (0.259)
Number of observations: (1) 486 | (2) 519 | (3) 1005 | (4) 737

^a This table estimates the effects of marginal tax rates on tax evasion versus tax avoidance using bunching evidence around kink points of the tax schedule where marginal tax rates jump. Panel A focuses on the self-employed and the top rate kink where the marginal tax rate jumps from 49% to 62% at 318,700 DKK. Panel B focuses on stock income and the top rate kink for stock income where the marginal tax rate jumps from 28% to 43% at 88,600 DKK for married filers and 44,300 DKK for single filers (we have aligned single filers by multiplying their stock income by 2). As shown in Figure 4, in both cases there is significant evidence of bunching at the kink both for incomes before audits and incomes after audits. In each panel, the first row estimates the fraction of tax filers bunching (income within 1500 DKK of the kink) among tax filers with income within 40,000 DKK of the kink. Column 1 is for income before audit, while column 2 is for income after audit. Column 3 reports the difference between column 1 and column 2. Column 4 presents a robustness check on the difference when the sample is limited to tax filers within 20,000 DKK (instead of 40,000 DKK) of the kink. In each panel, the second row estimates the (compensated) elasticity of reported income with respect to the net-of-tax rate using bunching evidence (following the method developed in Saez (2010)). Column 1 is the elasticity for before-audit income, while column 2 is the elasticity for after-audit income. Column 3 reports the difference between column 1 and column 2. Column 4 presents as a robustness check the difference in elasticities when the sample is limited to tax filers within 20,000 DKK (instead of 40,000 DKK) of the kink. The elasticity of before-audit income combines both the evasion and avoidance elasticities, while the elasticity of after-audit income is the tax avoidance elasticity. Therefore, the difference in elasticities is the compensated elasticity of tax evasion with respect to the net-of-tax rate.
6. THE EFFECTS OF TAX ENFORCEMENT ON EVASION

6.1. Randomization Test

In this section, we consider the effects of audits and threat-of-audit letters on subsequent reporting. We start by running a randomization test to verify that the treatment and control groups are indeed ex ante identical in both experiments. Appendix Table A.I shows the results of the audit randomization (0% vs. 100% audit group) in part A, the letter randomization (letter vs. no-letter group) in part B, and the within-letter randomization (50% vs. 100% letter group) in part C. The table shows mean income and the percent of taxpayers with nonzero income in different categories, the percent filing a return the following year in 2008, and a number of socioeconomic characteristics. Unlike the baseline compliance study, statistics are not reported using population weights to match the full Danish population, but instead reflect the composition in the stratified random sample on which the experiments are based. We use sample weights as this slightly increases the power of our results.

For the audit randomization, income statistics are based on the tax returns filed in 2007, that is, right before the baseline audits were implemented. We see that the differences between the 0% and 100% audit groups are always very small and never statistically significant at the 5% level, showing that the randomization was indeed successful. Importantly, the fraction filing returns the following year in 2008 is also statistically identical across the two groups (97.08% and 96.94%, respectively). We have also verified that, conditional on filing a 2008 return, there are no statistically significant differences across the 0% and 100% audit groups. This absence of selective attrition is critical, as our analysis of prior-audit effects is based on 2008 returns.

For the letter and within-letter randomizations, statistics are based on the prepopulated tax returns in 2008, that is, right before the letter experiment was implemented.29 Among the 39 differences we show, only two (capital income and fraction married in the letter vs. no-letter groups) are borderline significant at the 5% level. Because we are looking at so many statistics, it is not surprising that a small fraction (2/39 = 5.1%) is borderline significant at the 5% level. Hence, we conclude that the letter randomization was also successful.
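A randomization test of this kind is easy to reproduce in outline: compare pre-treatment means across groups, covariate by covariate. The sketch below uses synthetic placeholder data and unweighted two-sample t-tests, whereas the paper weights by the stratified design.

    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "net_income": rng.normal(200_000, 60_000, 10_000),  # placeholder data
        "married": rng.integers(0, 2, 10_000),
        "group": rng.choice(["letter", "no_letter"], 10_000),
    })
    for c in ["net_income", "married"]:
        a = df.loc[df.group == "letter", c]
        b = df.loc[df.group == "no_letter", c]
        t, p = stats.ttest_ind(a, b, equal_var=False)
        print(f"{c}: difference = {a.mean() - b.mean():.3f}, p = {p:.3f}")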
6.2. The Effect of Audits on Future Reporting

Let us first consider the effect of audits on future reporting in the context of the economic model in Section 2. In that model, reported income depends on the perceived probability of detection when engaging in tax evasion. Because audits are rare events for a taxpayer, they are likely to provide new information and therefore lead to a change in the perceived detection probability. We may think of the detection probability as a product of two probabilities: the probability of audit and the probability of detection conditional on audit. Audits may have an effect through both channels.

29 More precisely, the statistics are based on the last version of the return before the letters were sent out. As the letters were distributed shortly after the prepopulated returns were created, the last return for most taxpayers was indeed the prepopulated return. However, a small fraction of taxpayers (about 17%) had already made self-reported adjustments to their returns in the short time window between prepopulated returns and letters (recall that taxpayers can repeatedly correct their returns at any time before the May 1st deadline). To minimize noise, we consider the effect of letters on adjustments to the latest return for each taxpayer at the time of receiving the letter, and hence the randomization test is based on this tax return concept.
One would expect the effect on the perceived audit probability to be positive. The effect on the perceived probability of detection conditional on audit is ambiguous, because the taxpayer may learn that the tax administration is either more or less effective at uncovering evasion than expected. In practice, however, audited taxpayers are contacted only if tax inspectors, upon examining the return, believe that hidden income or unjustified deductions can potentially be uncovered. Hence, taxpayers are typically aware of being audited only when tax inspectors are successful. This means that the probability of detection conditional on audit is likely to increase as a result of experiencing an audit. Therefore, the model predicts an increase in reported income. In particular, self-reported income should increase, but not third-party reported income, where the detection probability is already close to 1.

The few previous studies of the effect of audits on future reporting have not found significant results. These studies have considered either TCMP audits (Long and Schwartz (1987)) or ordinary audits (Erard (1992)). The problem with TCMP audits is that taxpayers are aware that selection is random and that the audit is part of a special study. The problem with using ordinary audits is that selection is endogenous and it is very difficult to control for the ensuing selection bias in a convincing way. Our data contain more compelling variation based on randomized audit treatments where participants are not aware of the randomization.

As the experimental audits were implemented on tax returns filed in 2007, we estimate the effects of audits on subsequent reporting by comparing changes in filed income from 2007 to 2008 (income earned in 2006 and 2007, respectively) in the 0% and 100% audit groups. Table V shows the results for the full sample in panel A and for the sample limited to those receiving no threat-of-audit letter in panel B.30 Each panel shows amounts of income change at the top and the probability of an income increase at the bottom. Income changes have been trimmed at −200,000 and +200,000 kroner to remove extreme observations that make estimates imprecise. This trimming affects less than 2% of the observations on average. To provide a benchmark, column 1 shows actual detected evasion in the baseline audits, that is, the average amount of detected underreporting at the top of each panel and the fraction of taxpayers found underreporting at the bottom. Actual detected evasion can be seen as the mechanical effect of a tax audit, whereas the effect on subsequent income reporting is the behavioral (deterrence) effect of the change in perceived detection probability. We show the estimated deterrence effect of audits on total reported income in column 2, self-reported income in column 3, and third-party reported income in column 4.

30 The threat-of-audit letter treatment (analyzed in the next section) is orthogonal to the audit treatment, and both panels therefore show causal effects of audits. But the full sample may produce different results than the no-letter sample, either because of cross-effects between the two treatments or because the no-letter sample contains a higher share of self-employed individuals, as the letter experiment excluded the self-employed.
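In outline, the Table V estimates are trimmed differences in means between the 100% and 0% audit groups. A minimal unweighted sketch on synthetic placeholder data (the paper weights by the stratification design):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 41_571
    df = pd.DataFrame({
        "income_2006": rng.normal(200_000, 50_000, n),  # placeholder data
        "audited": rng.choice([True, False], n),        # 100% vs. 0% group
    })
    df["income_2007"] = df["income_2006"] + rng.normal(5_000, 30_000, n)

    # Trim year-to-year changes at +/- 200,000 DKK, then difference the means.
    change = (df["income_2007"] - df["income_2006"]).clip(-200_000, 200_000)
    t = df["audited"]
    effect = change[t].mean() - change[~t].mean()
    se = np.sqrt(change[t].var() / t.sum() + change[~t].var() / (~t).sum())
    print(f"deterrence effect: {effect:.0f} DKK (se {se:.0f})")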
TABLE V
EFFECTS OF RANDOMIZED PRIOR AUDITS ON YEAR-TO-YEAR INCOME CHANGES^a

Column key: (1) Baseline Audit Adjustment; (2) Total Income, (3) Self-Reported Income, and (4) Third-Party Reported Income, giving the change in reported income (panels A1 and B1) and the probability of an income increase (panels A2 and B2) from 2006 to 2007; (5) IV Effect of Audit Adjustment on Income Change. Standard errors in parentheses.

A. Full Sample

A1. Amounts [difference between the 100% and the 0% audit groups]
Net income: (1) 8491 (827) | (2) 2557 (787) | (3) 2331 (658) | (4) 225 (691) | (5) 0.301 (0.098)
Total tax: (1) 3295 (257) | (2) 1375 (464) | (5) 0.417 (0.144)

A2. Probability of audit adjustment and income increase [difference between the 100% and the 0% audit groups]
Net income: (1) 19.09 (0.28) | (2) 0.89 (0.48) | (3) 2.11 (0.48) | (4) 0.24 (0.48) | (5) 0.047 (0.025)
Total tax: (1) 19.17 (0.28) | (2) 0.99 (0.49) | (5) 0.052 (0.025)

Number of observations: 41,571 in each column.

B. Sample Limited to Those Receiving No Threat-of-Audit Letter

B1. Amounts [difference between the 100% and the 0% audit groups]
Net income: (1) 12,835 (1310) | (2) 2904 (1117) | (3) 3086 (1008) | (4) −182 (962) | (5) 0.226 (0.091)
Total tax: (1) 5019 (406) | (2) 1732 (677) | (5) 0.345 (0.137)

B2. Probability of audit adjustment and income increase [difference between the 100% and the 0% audit groups]
Net income: (1) 25.75 (0.39) | (2) 0.73 (0.61) | (3) 2.12 (0.61) | (4) −0.52 (0.61) | (5) 0.028 (0.024)
Total tax: (1) 25.93 (0.39) | (2) 0.98 (0.61) | (5) 0.038 (0.024)

Number of observations: 26,180 in each column.

^a This table reports the effects of prior audits on income changes from 2006 to 2007. Panels A1 and B1 focus on the amounts of income changes, while panels A2 and B2 focus on the probability of a (nominal) income increase. In all cases, we report the differences between the 100% audit group and the 0% audit group in the base year. Column 1 reports the difference between the 100% audit group and the 0% audit group in the average amount of audit adjustment in the base year (panels A1 and B1) and the fraction with an audit adjustment for underreporting in the base year (panels A2 and B2). Column 2 reports the difference between the 100% audit group and the 0% audit group in the average income increase from 2006 to 2007 (panels A1 and B1) and the fraction with a nominal income increase from 2006 to 2007 (panels A2 and B2). Column 3 repeats the analysis of column 2 but limited to self-reported income instead of total reported income. Column 4 repeats the analysis of column 2 but limited to third-party reported income instead of total reported income. Note that col. 2 = col. 3 + col. 4 for amounts in panels A1 and B1. Column 5 presents the coefficient of an IV regression of the income change (panels A1 and B1) and a dummy for an income increase (panels A2 and B2) on the baseline audit adjustment for underreporting, using the 100% audit group dummy as an instrument. Effectively, we have col. 5 = col. 2/col. 1. This coefficient in panels A1 and B1 can be interpreted as the causal effect of an additional krone of audit adjustment on reported income the following year, assuming that audits which did not lead to any audit adjustment did not have any causal impact on reported income the following year. In each panel, we report effects for net income and for total tax liability. Estimates are weighted according to the experiment stratification design. Weights do not reflect population weights. Standard errors are reported in parentheses. For panels A1 and B1, all amounts are in Danish kroner (U.S. $1 = 5.2 DKK as of 1/2010). Income changes are trimmed at −200,000 DKK and 200,000 DKK. That is, income changes are defined as min(200,000, max(income in 2007 − income in 2006, −200,000)). This is done to avoid extreme outcomes, which make estimates very imprecise. Less than 2% of observations are trimmed on average.
Column 5 shows the ratio of column 2 to column 1, obtained as an instrumental variable (IV) regression of the income change (amount and income-increase dummy, respectively) on the baseline audit adjustment (amount and upward-adjustment dummy, respectively), using the 100% audit group dummy as an instrument. For amounts, this can be interpreted as the causal effect of an additional krone of audit adjustment on total reported income the following year, assuming that audits that do not lead to any adjustment have no behavioral effect. For probabilities, it gives the causal effect of experiencing an upward audit adjustment on the probability of increasing reported income.

Table V shows that audits have a positive deterrence effect on tax evasion. For the full sample, the effect on total net income is 2557 kroner, or 0.301 kroner per additional krone of audit adjustment. The effect on tax liability is 1375 kroner, corresponding to 0.417 kroner per krone of audit adjustment. These estimates are strongly significant. The effects on the probabilities of increasing total income and tax liability are qualitatively similar, but these estimates are only marginally significant at the 5% level. We find that experiencing an audit adjustment raises the probabilities of increasing reported income and tax liability the following year by about 1 percentage point, or 5% of the baseline probability.

According to the model in Section 2, the deterrence effects should be driven entirely by self-reported income, as there is no room for additional deterrence for third-party reported income. The breakdown of the total estimated effect into the separate effects on self-reported income and third-party income confirms this prediction. For third-party reported income, the estimated effects are close to zero and statistically insignificant. For self-reported income, the effect on the reported amount equals 2331 kroner, or 91% of the total effect. The effect on the probability of increasing self-reported income is 2.1 percentage points, more than twice as large as the total effect, and this estimate is now strongly significant. Considering the no-letter sample in panel B, we find that the qualitative effects are the same as for the full sample. Moreover, the quantitative magnitudes do not change by much; in fact, the estimated deterrence effects for the no-letter sample are not significantly different from the full-sample estimates at the 5% level.

To conclude, the overall deterrence effect of audits is positive but quite modest. The effect of audits on total net income corresponds to only about 1% of income. But this effect is driven entirely by purely self-reported income and constitutes a substantial fraction of self-reported income. Hence, when the information environment is such that taxpayers are able to cheat, they display substantial underreporting (Section 4) and respond to increased enforcement by substantially reducing underreporting (this section).31 The overall deterrence effect of increased enforcement is therefore modest because of the widespread use of third-party information reporting, where detection probabilities are close to 1 initially. These results are consistent with the economic model in Section 2.

31 The size of the audit effect on self-reported income can be gauged by comparing it to the effect of the marginal tax rate. We can do this for the self-employed, for whom we estimate the elasticity of evasion with respect to the marginal tax rate. For this subsample, average income is 298,200 kroner and the audit effect on next-year income is 4083 kroner (obtained as in Table V, conditioning on self-employment). The evasion elasticity with respect to the marginal tax rate equals 0.076 (Table IV) and the average marginal tax rate for the self-employed is 45%. Denoting the elasticity by e, we have log((z + Δz)/z) = e · log((1 − t − Δt)/(1 − t)), where z is income and t is the marginal tax rate. Using the numbers z = 298,200, Δz = 4083, e = 0.076, and t = 0.45, the formula implies Δt = −10.8%. That is, it takes a 10.8 percentage-point cut in the marginal tax rate (on a given taxpayer in a given year) to reduce evasion by as much as one prior-year audit of that taxpayer. This shows that prior-audit effects on self-reported income are very large compared to the tax-rate effect.
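Two of the magnitudes above are simple arithmetic on reported numbers and can be checked directly; small discrepancies reflect rounding of the published estimates.

    # (i) The IV (Wald) estimates in Table V, column 5, equal the reduced-form
    # effect (column 2) divided by the first stage (column 1).
    print(2557 / 8491)  # ~0.301 kroner per krone of audit adjustment
    print(1375 / 3295)  # ~0.417 for total tax

    # (ii) The tax-rate equivalence in footnote 31: solve
    # log((z + dz) / z) = e * log((1 - t - dt) / (1 - t)) for dt.
    z, dz, e, t = 298_200, 4083, 0.076, 0.45
    dt = (1 - t) * (1 - ((z + dz) / z) ** (1 / e))
    print(dt)  # ~ -0.108: a 10.8 percentage-point cut in the marginal tax rate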
6.3. The Effect of Threat-of-Audit Letters

We now turn to the effect of the threat-of-audit letters, which provide exogenous variation in the probability of audit. As described above, the letters announce audit probabilities of either 50% or 100% to randomly selected taxpayers in the full-audit and no-audit groups. When interpreting the results, it is important to keep in mind that the probability of audit is not the same as the probability of detection, the parameter that ultimately determines tax compliance according to theory. The variation in the audit probability creates variation in the detection probability, with the size of the variation depending on the probability of detection conditional on audit. This conditional detection probability is unobservable, but is likely to be small for self-reported income, where tax inspectors have little hard information to guide them. Hence, while the audit probabilities in the letter experiment are very high, the detection probabilities are much more modest, and the magnitude of the estimates should be seen in this light.

To study the effects of the threat-of-audit letters, we consider the sample of employees (as the letter randomization did not include self-employed individuals) who filed tax returns in both 2007 and 2008 and had an address on record so that they could be reached by post. Because taxpayers received the threat-of-audit letters shortly after receiving the prepopulated return (P event) and about 1 month prior to the filing deadline (F event) in 2008, we focus on the effect of letters on the difference between the P and F events in 2008 (for incomes earned in 2007). These are self-reported adjustments to the prepopulated return (see Section 6.1 above for exact details). As this prepopulated return includes all third-party information available to the government, the estimates should be interpreted as effects on self-reported income.

Table VI shows results for amounts of income adjustment in panel A and probabilities of income adjustment in panel B.
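A minimal sketch of the outcome construction, using a hypothetical two-taxpayer example (the column names and capping details are ours; the paper's exact item-level construction is described in the notes to Table VI below):

    import pandas as pd

    df = pd.DataFrame({"prepopulated": [250_000, 180_000],   # P returns
                       "filed":        [249_200, 195_000]})  # F returns
    adjustment = (df["filed"] - df["prepopulated"]).clip(-10_000, 10_000)
    upward = adjustment.clip(lower=0)    # the outcome in columns 3, 6, and 9
    downward = adjustment.clip(upper=0)  # the outcome in columns 4, 7, and 10
    # Any adjustment = upward + downward, which is why column 2 equals
    # column 3 + column 4 in Table VI (e.g., 94 = 84 + 10 for net income).
    assert (adjustment == upward + downward).all()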
TABLE VI
THREAT-OF-AUDIT LETTER EFFECTS ON INDIVIDUAL UPWARD ADJUSTMENTS TO REPORTED INCOME^a

Column key: (1) baseline, no-letter group; (2)-(4) letter group minus no-letter group (any, upward, and downward adjustment), both 0% and 100% audit groups; (5)-(7) same differences, 0% audit group only; (8)-(10) same differences, 100% audit group only; (11) 50% letter minus no letter, upward adjustment; (12) 100% letter minus 50% letter, upward adjustment. Standard errors in parentheses.

A. Average Amounts of Individual Upward Adjustments
                 (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)    (10)    (11)    (12)
Net income      −497      94      84      10      74      77      −3     115      92      23      58      52
                (31)    (42)    (22)    (34)    (55)    (29)    (45)    (64)    (35)    (52)    (26)    (26)
Total tax       −322      67      50      17      57      46      11      77      54      23      32      36
                (24)    (32)    (18)    (26)    (43)    (24)    (34)    (49)    (28)    (39)    (21)    (21)
Number of obs.  9397  24,788  24,788  24,788  14,145  14,145  14,145  10,643  10,643  10,643  24,788  24,788

B. Probability of Upward Adjustments (in percent)
                 (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)    (10)    (11)    (12)
Net income     13.37    1.63    1.56    0.07    2.29    1.52    0.76    0.98    1.60   −0.62    1.10    0.93
              (0.35)  (0.47)  (0.28)  (0.40)  (0.62)  (0.37)  (0.53)  (0.73)  (0.44)  (0.61)  (0.33)  (0.33)
Total tax      13.69    1.52    1.57   −0.05    2.03    1.65    0.37    1.02    1.49   −0.47    1.03    1.07
              (0.35)  (0.48)  (0.29)  (0.40)  (0.63)  (0.37)  (0.54)  (0.73)  (0.44)  (0.61)  (0.33)  (0.33)
Number of obs.  9397  24,788  24,788  24,788  14,145  14,145  14,145  10,643  10,643  10,643  24,788  24,788

a The table reports the effects of threat-of-audit letters on individual adjustments to reported income from the time the letter is received in March to the final May 1st deadline for tax return filing. Panel A focuses on the average amounts of adjustment. To reduce noise due to extreme observations, all amounts are capped at 10,000 DKK. The cap affects about 1.65 percent of observations for net-income adjustments and 0.75 percent of observations for total tax adjustments (due to net-income adjustments). Panel B focuses on the probability of making an adjustment to net income or total tax (expressed in percent). Column 1 reports average adjustments (panel A) and the probability of adjustment (panel B) among those taxpayers who did not receive the letter. Column 2 reports the difference in average adjustments (panel A) and probability of adjustment (panel B) between the letter and no-letter groups. Column 3 reports the difference in upward adjustments, while column 4 reports the difference in downward adjustments (col. 3 + col. 4 = col. 2). Columns 5, 6, and 7 repeat cols. 2, 3, and 4 but limit the sample to those not audited in the base year (0% audit group). Columns 8, 9, and 10 repeat cols. 2, 3, and 4 but limit the sample to those audited in the base year (100% audit group). Column 11 reports the difference in upward adjustments between the letter group with 50% audit probability and the no-letter group. Column 12 reports the difference in upward adjustments between the letter group with 100% audit probability and the letter group with 50% audit probability. In each panel, we report effects for net income and for total tax liability. The sample includes only tax filers who did not have any self-employment income in the base year (as tax filers with self-employment income were not part of the letter experiment). Estimates are weighted according to the experiment stratification design. Weights do not reflect population weights. Standard errors are reported in parentheses.
The first column in the table establishes a baseline by showing the amounts and probabilities of self-reported adjustments to the prepopulated return among those who did not receive a letter. Columns 2–4 then show the effect of receiving any letter (50% or 100% letter) for the full sample of employees (including both the 0% and 100% audit groups). Column 2 displays the effect on total adjustments, while columns 3 and 4 split the total effect into upward and downward adjustments. As an adjustment is either upward or downward, column 2 is the sum of columns 3 and 4. The following three findings emerge. First, there is a positive effect of letters on the amounts and probabilities of self-reported adjustments to income and tax liability. For total net income, the amount goes up significantly by 94 kroner as a result of receiving a letter. As the baseline adjustment is −497 kroner, the letter effect corresponds to an increase of 19% of the initial adjustment in absolute value. The probability of adjustment increases by 1.63 percentage points from a base of 13.37%, corresponding to an increase of 12.2%, and this estimate is strongly significant. The effects are roughly similar for total tax paid. Second, the effect of letters on adjustments reflects almost exclusively upward adjustments, and the effect on upward adjustments is always strongly significant. This is of course consistent with the economic model in Section 2: letters increase the perceived probability of detection and therefore deter taxpayers from underreporting. Third, the effect of letters on downward adjustments is always close to zero and never statistically significant.

The following columns split the sample by 100% audit and 0% audit in the baseline year. This allows us to explore the presence of cross-effects between the letter and audit treatments. The broad conclusion from these estimates is that letter effects are roughly the same in the 0% and 100% audit groups. In particular, the effects on upward adjustments are almost exactly the same in the two groups. For downward adjustments, the effects on amounts are close to zero and insignificant in both groups. The effects on probabilities of downward adjustment display larger differences between the two groups, but are always statistically insignificant. Hence, there do not appear to be important cross-effects between the two treatments.

Finally, columns 11 and 12 explore the differential impact of 50% and 100% letters. Column 11 shows the difference in upward adjustments between the 50%-letter and the no-letter groups, while column 12 shows the difference in upward adjustments between the 100%-letter and 50%-letter groups. We see a significant difference in the effects of the two types of letters, and the direction of the difference is consistent with the economic model in Section 2. For both amounts and probabilities, the differential impact of the 100% letter over the 50% letter tends to be roughly similar to the impact of the 50% letter over no letter, implying that a 100% audit probability has about twice the effect of a 50% audit probability.
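The relative effects quoted in this paragraph follow from simple ratios of the Table VI entries; a minimal check (ours):

```python
# Letter effects relative to the no-letter baseline (Table VI, net income).
baseline_amount, letter_amount = -497.0, 94.0   # kroner, cols. 1 and 2
baseline_prob, letter_prob = 13.37, 1.63        # percent, cols. 1 and 2

print(letter_amount / abs(baseline_amount))  # ~0.19: 19% of the baseline adjustment
print(letter_prob / baseline_prob)           # ~0.122: a 12.2% increase
```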
We may summarize the results in this section as follows. Consistent with the model in Section 2, audit threats have a significant positive effect on self-reported income, and the effect of 100% audit threats is significantly larger than the effect of 50% audit threats. However, the quantitative magnitudes of the letter effects are modest compared to the effects of actual audits in the previous section, which suggests that audit-threat letters create less variation in the perceived probability of detection than actual audit experiences. A key difference between the two treatments is that audit-threat letters change the probability of audit without affecting the probability of detection conditional on audit, whereas actual audits are likely to raise the probability of detection conditional on audit, as discussed earlier. If conditional detection probabilities are low for self-reported income, threat-of-audit letters will have a relatively small effect. An additional possibility is that taxpayers pay less attention to letter threats than to actual audit experiences. For these reasons, analyzing actual audits may be a more powerful way to understand the deterrence effect of enforcement than sending out letters.

7. CONCLUSION

The economics literature on tax evasion follows on the seminal work of Allingham and Sandmo (1972), who considered a situation where a taxpayer decides how much income to self-report when facing a probability of detection and a penalty for cheating. Microsimulations as well as laboratory experiments show that, at realistic levels of detection probabilities and penalties, an AS-type setting predicts much less compliance than we observe in practice, at least in developed countries. This suggests that the AS model misses important aspects of the real-world reporting environment, and a number of different generalizations have been proposed and analyzed in the literature. In particular, several authors have argued that observed compliance levels can only be explained by accounting for psychological or cultural aspects of the reporting decision.

While we do not deny the importance of psychological and cultural aspects in the decision to evade taxes, the evidence presented in this paper points to a more classic information story. In particular, we show that the key distinction in the taxpayer's reporting decision is whether income is subject to third-party reporting or solely self-reported. Augmenting the AS model with third-party reporting can account for most of our empirical findings.

For self-reported income, our empirical results fit remarkably well with the basic AS model: tax evasion is substantial and responds negatively to an increase in the perceived probability of detection coming from either a prior audit or a threat-of-audit letter. Interestingly, evidence from bunching at kink points shows that the elasticity of tax evasion with respect to the marginal tax rate is very low, which suggests that rigorous tax enforcement is a much more effective tool to combat evasion than cutting marginal tax rates.

For third-party reported income, tax evasion is extremely modest and does not respond to the perceived probability of detection, because this probability is already very high. This shows that third-party reporting is a very effective enforcement device. Given that audits are very costly and eliminate only a part of tax evasion, enforcement resources may be better spent on expanding third-party reporting than on audits of self-reported income.32 This also suggests that more work is needed to build a tax enforcement theory that centers on third-party reporting by firms, as recently explored by Kleven, Kreiner, and Saez (2009).

32 Indeed, two expansions of third-party reporting have been scheduled in Denmark (partly as a consequence of this study). One is the implementation of full third-party reporting of buying and selling prices for stock. The other is an expansion of third-party reporting of fringe benefits to employees.
TABLE A.I
RANDOMIZATION CHECKS: AUDIT AND LETTER EXPERIMENTS^a

A. Audit Randomization
                         0% Audit   100% Audit  Difference   Standard
                         Group (1)  Group (2)   100%−0% (3)  Error (4)
Net income               265,209    263,485     −1724        (6047)
Total tax                100,968    100,460     −508         (3010)
Personal income          216,418    217,426     1007         (2351)
Capital income           −13,127    −12,805     323          (1015)
Deductions               −11,839    −11,976     −138         (160)
Stock income             18,141     15,880      −2261        (4928)
Self-employment          55,616     54,960      −656         (2869)
% with net income        99.55      99.52       −0.03        (0.07)
% with total tax         96.71      96.61       −0.11        (0.17)
% with personal income   94.98      94.85       −0.13        (0.21)
% with capital income    95.67      95.40       −0.27        (0.20)
% with deductions        71.69      71.76       0.07         (0.44)
% with stock income      40.30      40.23       −0.07        (0.47)
% with self-employment   40.18      40.37       0.19         (0.47)
Female (%)               39.93      39.59       −0.33        (0.47)
Married (%)              58.46      58.13       −0.32        (0.48)
Church membership (%)    85.83      85.71       −0.12        (0.34)
Copenhagen (%)           3.14       3.13        −0.01        (0.17)
Age                      49.28      49.43       0.14         (0.16)
% filing in 2007         97.08      96.94       −0.14        (0.16)
Number of observations   23,148     19,630      42,778 (total)

B. Letter Randomization
                         No-Letter  Letter      Difference        Standard
                         Group (5)  Group (6)   Col. 6−Col. 5 (7) Error (8)
Net income               239,936    244,477     4541              (3425)
Total tax                82,443     84,230      1786              (1588)
Personal income          257,022    259,748     2725              (2904)
Capital income           −16,554    −15,485     1068              (534)
Deductions               −8333      −8304       29                (160)
Stock income             7371       8220        849               (1777)
Self-employment          430        299         −131              (209)
% with net income        98.73      98.64       −0.09             (0.15)
% with total tax         96.64      96.26       −0.38             (0.25)
% with personal income   97.29      97.11       −0.18             (0.22)
% with capital income    97.02      96.90       −0.12             (0.23)
% with deductions        64.18      64.49       0.31              (0.65)
% with stock income      44.07      43.63       −0.44             (0.67)
% with self-employment   0.78       0.79        0.01              (0.12)
Female (%)               49.80      50.10       0.30              (0.67)
Married (%)              54.54      53.22       −1.32             (0.67)
Church membership (%)    86.82      86.86       0.04              (0.46)
Copenhagen (%)           3.17       3.33        0.16              (0.24)
Age                      49.09      48.90       −0.19             (0.25)
% filing in 2007         100.00     100.00      0.00              (0.00)
Number of observations   9397       15,391      24,788 (total)

C. Within Letter Randomization
                         50% Letter  100% Letter  Difference          Standard
                         Group (9)   Group (10)   Col. 10−Col. 9 (11) Error (12)
Net income               243,878     245,078      1200                (4422)
Total tax                84,022      84,438       415                 (2073)
Personal income          259,374     260,123      749                 (3730)
Capital income           −15,613     −15,358      255                 (626)
Deductions               −8268       −8341        −73                 (193)
Stock income             7857        8584         727                 (2243)
Self-employment          527         70           −457                (268)
% with net income        98.52       98.76        0.24                (0.19)
% with total tax         96.26       96.25        −0.02               (0.31)
% with personal income   96.99       97.23        0.25                (0.27)
% with capital income    96.77       97.03        0.26                (0.28)
% with deductions        64.79       64.19        −0.60               (0.77)
% with stock income      43.59       43.68        0.09                (0.80)
% with self-employment   0.77        0.82         0.05                (0.14)
Female (%)               49.83       50.38        0.55                (0.81)
Married (%)              53.79       52.65        −1.13               (0.80)
Church membership (%)    87.06       86.66        −0.40               (0.54)
Copenhagen (%)           3.32        3.34         0.02                (0.29)
Age                      49.01       48.80        −0.21               (0.30)
% filing in 2007         100.00      100.00       0.00                (0.00)
Number of observations   7706        7685         15,391 (total)

a This table presents randomization checks for the audit experiment (part A, columns 1–4) and the letter experiment (part B, columns 5–8, and part C, columns 9–12). Part A compares baseline reported incomes in 2006 (before the audit experiment took place). Columns 1 and 2 present the baseline averages for the 0% audit (control) group and the 100% audit (treatment) group, respectively. Column 3 presents the difference between the treatment group and the control group. The standard error of the difference is presented in column 4. Parts B and C compare prepopulated tax returns for 2007 incomes before the letters are sent. The columns in parts B and C are constructed as in part A. In part B, the sample is restricted to tax filers not registered as self-employed in the base year, as the letter experiment could not be carried out for the self-employed. In part C, the sample is further restricted to tax filers who received either the 50% threat-of-audit letter or the 100% threat-of-audit letter. Estimates are weighted according to the experiment stratification design. Weights do not reflect population weights. All amounts are in Danish kroner (U.S. $1 = 5.2 DKK as of 1/2010).
REFERENCES

ALLINGHAM, M. G., AND A. SANDMO (1972): "Income Tax Evasion: A Theoretical Analysis," Journal of Public Economics, 1, 323–338. [651,655,689]
ANDREONI, J., B. ERARD, AND J. FEINSTEIN (1998): "Tax Compliance," Journal of Economic Literature, 36, 818–860. [651,652,654]
CHETTY, R., J. FRIEDMAN, T. OLSEN, AND L. PISTAFERRI (2009): "Adjustment Costs, Firm Responses, and Labor Supply Elasticities: Evidence From Danish Tax Records," Working Paper 15617, NBER. [677]
CLOTFELTER, C. T. (1983): "Tax Evasion and Tax Rates: An Analysis of Individual Returns," Review of Economics and Statistics, 65, 363–373. [677]
ERARD, B. (1992): "The Influence of Tax Audits on Reporting Behavior," in Why People Pay Taxes: Tax Compliance and Enforcement, ed. by J. Slemrod. Ann Arbor: University of Michigan Press. [682]
FEINSTEIN, J. (1991): "An Econometric Analysis of Income Tax Evasion and Its Detection," Rand Journal of Economics, 22, 14–35. [677]
INTERNAL REVENUE SERVICE (1996): "Federal Tax Compliance Research: Individual Income Tax Gap Estimates for 1985, 1988, and 1992," IRS Publication 1415 (Rev. 4–96), Washington, DC. [652,662,668]
——— (2006): "Updated Estimates of the Tax Year 2001 Individual Income Tax Underreporting Gap. Overview," Washington, DC. [652]
KLEVEN, H., M. KNUDSEN, C. KREINER, S. PEDERSEN, AND E. SAEZ (2010): "Unwilling or Unable to Cheat? Evidence From a Randomized Tax Audit Experiment in Denmark," Working Paper 15769, NBER. [654]
KLEVEN, H., C. KREINER, AND E. SAEZ (2009): "Why Can Modern Governments Tax So Much? An Agency Model of Firms as Fiscal Intermediaries," Working Paper 15218, NBER. [657,676,691]
LONG, S., AND R. SCHWARTZ (1987): "The Impact of IRS Audits on Taxpayer Compliance: A Field Experiment in Specific Deterrence," in Annual Law and Society Association Meeting 1987, Washington, DC. [682]
SAEZ, E. (2010): "Do Taxpayers Bunch at Kink Points?" American Economic Journal: Economic Policy, 2, 180–212. [653,677,678,680]
SANDMO, A. (2005): "The Theory of Tax Evasion: A Retrospective View," National Tax Journal, 58, 643–663. [652,655]
SLEMROD, J. (2007): "Cheating Ourselves: The Economics of Tax Evasion," Journal of Economic Perspectives, 21, 25–48. [652]
SLEMROD, J., AND S. YITZHAKI (2002): "Tax Avoidance, Evasion and Administration," in Handbook of Public Economics III, ed. by A. J. Auerbach and M. Feldstein. Amsterdam: Elsevier. [654,655]
SLEMROD, J., M. BLUMENTHAL, AND C. CHRISTIAN (2001): "Taxpayer Response to an Increased Probability of Audit: Evidence From a Controlled Experiment in Minnesota," Journal of Public Economics, 79, 455–483. [654]
YITZHAKI, S. (1987): "On the Excess Burden of Tax Evasion," Public Finance Quarterly, 15, 123–137. [655,656]
Dept. of Economics & STICERD, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom;
[email protected], Danish Inland Revenue, Nicolai Eigtveds Gade 28, Copenhagen K 1402, Denmark;
[email protected], Institute of Economics, University of Copenhagen, Studiestraede 6, DK-1455 Copenhagen K, Denmark;
[email protected], Danish Inland Revenue, Nicolai Eigtveds Gade 28, Copenhagen K 1402, Denmark;
[email protected], and Dept. of Economics, University of California, Berkeley, 549 Evans Hall #3880, Berkeley, CA 94720, U.S.A.;
[email protected]. Manuscript received February, 2010; final revision received October, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 693–732
THE EFFECTS OF HEALTH INSURANCE AND SELF-INSURANCE ON RETIREMENT BEHAVIOR

BY ERIC FRENCH AND JOHN BAILEY JONES1

This paper provides an empirical analysis of the effects of employer-provided health insurance, Medicare, and Social Security on retirement behavior. Using data from the Health and Retirement Study, we estimate a dynamic programming model of retirement that accounts for both saving and uncertain medical expenses. Our results suggest that Medicare is important for understanding retirement behavior, and that uncertainty and saving are both important for understanding the labor supply responses to Medicare. Half the value placed by a typical worker on his employer-provided health insurance is the value of reduced medical expense risk. Raising the Medicare eligibility age from 65 to 67 leads individuals to work an additional 0.074 years over ages 60–69. In comparison, eliminating 2 years' worth of Social Security benefits increases years of work by 0.076 years.

KEYWORDS: Retirement behavior, saving, health insurance, Medicare.
1. INTRODUCTION ONE OF THE LARGEST SOCIAL PROGRAMS for the rapidly growing elderly population is Medicare. In 2009, Medicare had 46.3 million beneficiaries and $509 billion of expenditures, making it comparable to Social Security.2 Prior to receiving Medicare at age 65, many individuals receive health insurance only if they continue to work. This work incentive disappears at age 65, when Medicare provides health insurance to almost everyone. An important question, therefore, is whether Medicare significantly affects the labor supply of the elderly. This question is crucial when considering Medicare reforms; the fiscal effects of such reforms depend on how labor supply responds. However, there is relatively little research on the labor supply responses to Medicare. This paper provides an empirical analysis of the effect of employer-provided health insurance and Medicare in determining retirement behavior. Using 1 We thank Joe Altonji, Peter Arcidiacono, Gadi Barlevy, David Blau, John Bound, Chris Carroll, Mariacristina De Nardi, Tim Erikson, Hanming Fang, Donna Gilleskie, Lars Hansen, John Kennan, Spencer Krane, Hamp Lankford, Guy Laroque, John Rust, Dan Sullivan, Chris Taber, the editors and referees, students of Econ 751 at Wisconsin, and participants at numerous seminars for helpful comments. We received advice on the HRS pension data from Gary Englehardt and Tom Steinmeier, and excellent research assistance from Kate Anderson, Olesya Baker, Diwakar Choubey, Phil Doctor, Ken Housinger, Kirti Kamboj, Tina Lam, Kenley Peltzer, and Santadarshan Sadhu. The research reported herein was supported by the Center for Retirement Research at Boston College (CRR) and the Michigan Retirement Research Center (MRRC) pursuant to grants from the U.S. Social Security Administration (SSA) funded as part of the Retirement Research Consortium. The opinions and conclusions are solely those of the authors, and should not be construed as representing the opinions or policy of the SSA or any agency of the Federal Government, the CRR, the MRRC, or the Federal Reserve System. 2 Figures taken from 2010 Medicare Annual Report (Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (2010)).
data from the Health and Retirement Study (HRS), we estimate a dynamic programming model of retirement that accounts for both saving and uncertain medical expenses. Our results suggest that Medicare is important for understanding retirement behavior, because it insures against medical expense shocks that can exhaust a household’s savings. Our work builds upon, and in part reconciles, several earlier studies. Assuming that individuals value health insurance at the cost paid by employers, Lumsdaine, Stock, and Wise (1994) and Gustman and Steinmeier (1994) found that health insurance has a small effect on retirement behavior. One possible reason for their results is that they found that the average employer contribution to health insurance is modest, and declines by only a small amount after age 65. If workers are risk-averse, however, and if health insurance allows them to smooth consumption when facing volatile medical expenses, they could value employer-provided health insurance well beyond the cost paid by employers. Medicare’s age-65 work disincentive thus comes not only from the reduction in average medical costs paid by those without employer-provided health insurance, but also from the reduction in the volatility of those costs. Addressing this point, Rust and Phelan (1997) and Blau and Gilleskie (2006, 2008) estimated dynamic programming models that account explicitly for risk aversion and uncertainty about out-of-pocket medical expenses. Their estimated labor supply responses to health insurance are larger than those found in studies that omit medical expense risk. Rust and Phelan and Blau and Gilleskie, however, assumed that an individual’s consumption equals his income net of out-of-pocket medical expenses. In other words, they ignored an individual’s ability to smooth consumption through saving. If individuals can self-insure against medical expense shocks by saving, prohibiting saving will overstate the consumption volatility caused by medical cost volatility. It is therefore likely that Rust and Phelan and Blau and Gilleskie overstated the value of health insurance, and thus the effect of health insurance on retirement. In this paper we construct a life-cycle model of labor supply that not only accounts for medical expense uncertainty and health insurance, but also has a saving decision. Moreover, we include the coverage provided by means-tested social insurance to account for the fact that Medicaid provides a substitute for other forms of health insurance. To our knowledge, ours is the first study of its kind. While van der Klaauw and Wolpin (2008) and Casanova (2010) also estimated retirement models that account for both savings and uncertain medical expenses, they did not focus on the role of health insurance, and thus use much simpler models of medical expenses. Almost everyone becomes eligible for Medicare at age 65. However, the Social Security system and pensions also provide retirement incentives at age 65. This makes it difficult to determine whether the high job exit rates observed at age 65 are due to Medicare, Social Security, or pensions. One way we address this problem is to exploit variation in employer-provided health insurance. Some individuals receive employer-provided health insurance only while
they work, so that their coverage is tied to their job. Other individuals have retiree coverage, and receive employer-provided health insurance even if they retire. If workers value access to health insurance, those with retiree coverage should be more willing to retire before age 65. Our data show that individuals with retiree coverage tend to retire about 1/2 year earlier than individuals with tied coverage. This suggests that employer-provided health insurance is a determinant of retirement.

One problem with using employer-provided health insurance to identify Medicare's effect on retirement is that individuals may choose to work for a firm because of its postretirement benefits. The fact that early retirement is common for individuals with retiree coverage may not reflect the effect of health insurance on retirement. Instead, individuals with preferences for early retirement may be self-selecting into jobs that provide retiree coverage. To address this issue, we measure self-selection into jobs with different health insurance plans. We allow the value of leisure and the time discount factor to vary across individuals. Modelling preference heterogeneity with the approach used by Keane and Wolpin (2007), we find that individuals with strong preferences for leisure are more likely to work for firms that provide retiree health insurance. However, self-selection does not affect our main results.

Estimating the model by the method of simulated moments, we find that the model fits the data well with reasonable parameter values. Next, we simulate the labor supply response to changing some of the Medicare and Social Security retirement program rules. Raising the Medicare eligibility age from 65 to 67 would increase years worked by 0.074 years. Eliminating 2 years' worth of Social Security benefits would increase years worked by 0.076 years. Thus, even after allowing for both saving and self-selection into health insurance plans, the effect of Medicare on labor supply is as large as the effect of Social Security.

One reason why we find that Medicare is important is that we find that medical expense risk is important. Even when we allow individuals to save, they value the consumption smoothing benefits of health insurance. We find that about half the value a typical worker places on his employer-provided health insurance comes from these benefits.

The rest of the paper proceeds as follows. Section 2 develops our dynamic programming model of retirement behavior. Section 3 describes how we estimate the model using the method of simulated moments. Section 4 describes the HRS data that we use in our analysis. Section 5 presents life-cycle profiles drawn from these data. Section 6 contains preference parameter estimates for the structural model, and an assessment of the model's performance, both within and outside of the estimation sample. In Section 7, we conduct several policy experiments. In Section 8 we consider a few robustness checks. Section 9 concludes. A set of supplemental appendices comprises the Supplemental Material (French and Jones (2011)) and provides details of our methodology and data, along with additional results.
2. THE MODEL

To capture the richness of retirement incentives, our model is very complex and has many parameters. Appendix A provides definitions for all the variables.

2.1. Preferences and Demographics

Consider a household head seeking to maximize his expected discounted (where the subjective discount factor is β) lifetime utility at age t, t = 59, 60, ..., 94. Each period that he lives, the individual derives utility from consumption, Ct, and hours of leisure, Lt. The within-period utility function is of the form

(1)
U(C_t, L_t) = \frac{1}{1-\nu} \left(C_t^{\gamma} L_t^{1-\gamma}\right)^{1-\nu}
We allow both β and γ to vary across individuals. Individuals with higher values of β are more patient, while individuals with higher values of γ place less weight on leisure. The quantity of leisure is (2)
L_t = \bar{L} - N_t - \phi_{P,t} P_t - \phi_{RE} RE_t - \phi_H H_t
where \bar{L} is the individual's total annual time endowment. Participation in the labor force is denoted by Pt, a 0–1 indicator equal to 1 when hours worked, Nt, are positive. The fixed cost of work, φP,t, is treated as a loss of leisure. Including fixed costs helps us capture the empirical regularity that annual hours of work are clustered around 2,000 hours and 0 hours (Cogan (1981)). Following a number of studies,3 we allow preferences for leisure, in our case the value of φP,t, to increase linearly with age. Workers who leave the labor force can reenter; reentry is denoted by the 0–1 indicator REt = 1{Pt = 1 and Pt−1 = 0}, and individuals who reenter the labor market incur cost φRE. The quantity of leisure also depends on an individual's health status through the 0–1 indicator Ht = 1{healtht = bad}, which equals 1 when his health is bad. Workers alive at age t survive to age t + 1 with probability st+1. Following De Nardi (2004), workers who die value bequests of assets, At, according to the function

(3)
b(A_t) = \theta_B \frac{(A_t + \kappa)^{(1-\nu)\gamma}}{1-\nu}
The survival probability st and the transition probabilities for the health variable Ht depend on age and previous health status.

3 Examples include Rust and Phelan (1997), Blau and Gilleskie (2006, 2008), Gustman and Steinmeier (2005), Rust, Buchinsky, and Benitez-Silva (2003), and van der Klaauw and Wolpin (2008).
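As a minimal sketch of the preference block, the following functions implement equations (1)-(3) directly (ours, not the authors' code; parameter names follow the text and any supplied values are placeholders, not the estimated ones):

```python
def leisure(L_bar, N, P, RE, H, phi_P, phi_RE, phi_H):
    """Equation (2): leisure is the time endowment net of hours worked
    and the fixed costs of participation, reentry, and bad health."""
    return L_bar - N - phi_P * P - phi_RE * RE - phi_H * H

def utility(C, L, gamma, nu):
    """Equation (1): CRRA utility over a Cobb-Douglas bundle of
    consumption C and leisure L."""
    return (C**gamma * L**(1.0 - gamma))**(1.0 - nu) / (1.0 - nu)

def bequest(A, theta_B, kappa, gamma, nu):
    """Equation (3): warm-glow value of bequeathing assets A."""
    return theta_B * (A + kappa)**((1.0 - nu) * gamma) / (1.0 - nu)
```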
2.2. Budget Constraints

The individual holds three forms of wealth: assets (including housing), pensions, and Social Security. He has several sources of income: asset income, rAt, where r denotes the constant pre-tax interest rate; labor income, Wt Nt, where Wt denotes wages; spousal income, yst; pension benefits, pbt; Social Security benefits, sst; and government transfers, trt. The asset accumulation equation is

(4)
A_{t+1} = A_t + Y_t + ss_t + tr_t - M_t - C_t
where Mt denotes medical expenses. Post-tax income, Y_t = Y(rA_t + W_t N_t + ys_t + pb_t; \tau), is a function of taxable income and the vector τ, described in the Supplemental Material Appendix B, that captures the tax structure. Individuals face the borrowing constraint

(5)
A_t + Y_t + ss_t + tr_t - C_t \ge 0
Because it is illegal to borrow against future Social Security benefits and difficult to borrow against many forms of future pension benefits, individuals with low nonpension, non-Social Security wealth may not be able to finance their retirement before their Social Security benefits become available at age 62 (Kahn (1988), Rust and Phelan (1997), Gustman and Steinmeier (2005)).4 Following Hubbard, Skinner, and Zeldes (1994, 1995), government transfers provide a consumption floor: (6)
tr_t = \max\{0, C_{min} - (A_t + Y_t + ss_t)\}
Equation (6) implies that government transfers bridge the gap between an individual's "liquid resources" (the quantity in the inner parentheses) and the consumption floor. Treating Cmin as a sustenance level, we further require that Ct ≥ Cmin. Our treatment of government transfers implies that individuals will always consume at least Cmin, even if their out-of-pocket medical expenses exceed their financial resources.
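A minimal sketch of the budget block, equations (4)-(6) (ours; the post-tax income Y is computed elsewhere, per Supplemental Material Appendix B, and enters here as an input):

```python
def transfers(A, Y, ss, C_min):
    """Equation (6): transfers top liquid resources up to the floor C_min."""
    return max(0.0, C_min - (A + Y + ss))

def next_assets(A, Y, ss, M, C, C_min):
    """Equation (4), respecting the borrowing constraint (5) and C >= C_min."""
    tr = transfers(A, Y, ss, C_min)
    assert C >= C_min and A + Y + ss + tr - C >= 0.0
    return A + Y + ss + tr - M - C   # medical expenses M are realized after choices
```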
2.3. Medical Expenses, Health Insurance, and Medicare

We define Mt as the sum of all out-of-pocket medical expenses, including insurance premia and expenses covered by the consumption floor. We assume that an individual's medical expenses depend on five components. First, medical expenses depend on the individual's employer-provided health insurance, It. Second, they depend on whether the person is working, Pt, because workers who leave their job often pay a larger fraction of their insurance premiums. Third, they depend on the individual's self-reported health status, Ht. Fourth, medical expenses depend on age. At age 65, individuals become eligible for Medicare, which is a close substitute for employer-provided coverage.5 Offsetting this, as people age their health declines (in a way not captured by Ht), raising medical expenses. Finally, medical expenses depend on the person-specific component ψt, yielding

(7)
\ln M_t = m(H_t, I_t, t, P_t) + \sigma(H_t, I_t, t, P_t) \times \psi_t
Note that health insurance affects both the expectation of medical expenses, through m(·), and the variance, through σ(·). Even after controlling for health status, French and Jones (2004a) found that medical expenses are very volatile and persistent. Thus we model the person-specific component of medical expenses, ψt, as

(8)
\psi_t = \zeta_t + \xi_t, \qquad \xi_t \sim N(0, \sigma_\xi^2),

(9)   \zeta_t = \rho_m \zeta_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_\varepsilon^2),
where ξt and εt are serially and mutually independent; ξt is the transitory component, while ζt is the persistent component, with autocorrelation ρm. We assume that medical expenditures are exogenous. It is not clear ex ante whether this causes us to understate or overstate the importance of health insurance. On the one hand, individuals who have health insurance receive better care. Our model does not capture this benefit, and in this respect understates the value of health insurance. Conversely, treating medical expenses as exogenous ignores the ability of workers to offset medical shocks by adjusting their expenditures on medical care. This leads us to overstate the consumption risk facing uninsured workers, and thus the value of health insurance. Evidence from other structural analyses suggests that our assumption of exogeneity leads us to overstate the effect of health insurance on retirement.6

5 Individuals who have paid into the Medicare system for at least 10 years become eligible at age 65. A more detailed description of the Medicare eligibility rules is available at http://www.medicare.gov/.

6 To our knowledge, Blau and Gilleskie (2006) is the only estimated, structural retirement study to have endogenous medical expenditures. Although Blau and Gilleskie (2006) did not discuss how their results would change if medical expenses were treated as exogenous, they found that even with several mechanisms (such as prescription drug benefits) omitted, health insurance has "a modest impact on employment behavior among older males." De Nardi, French, and Jones (2010) studied the saving behavior of retirees. They found that the effects of reducing means-tested social insurance are smaller when medical care is endogenous, rather than exogenous. They also found, however, that even when medical expenditures are a choice variable, they are a major reason why the elderly save.
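The medical expense process of equations (7)-(9) can be simulated in a few lines. The sketch below is ours, with m_fn and s_fn standing in for the estimated profiles m(·) and σ(·), which we do not reproduce; sig_eps and sig_xi are the standard deviations of the innovations:

```python
import numpy as np

def simulate_log_medex(T, rho_m, sig_eps, sig_xi, m_fn, s_fn, states, rng):
    """Equations (7)-(9): persistent AR(1) plus transitory shock, scaled
    and shifted by the profiles m(.) and sigma(.)."""
    zeta, lnM = 0.0, np.empty(T)
    for t in range(T):
        zeta = rho_m * zeta + rng.normal(0.0, sig_eps)       # equation (9)
        psi = zeta + rng.normal(0.0, sig_xi)                 # equation (8)
        H, I, P = states[t]                                  # health, insurance, work
        lnM[t] = m_fn(H, I, t, P) + s_fn(H, I, t, P) * psi   # equation (7)
    return lnM
```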
Differences in labor supply behavior across health insurance categories are an integral part of identifying our model. We assume that there are three mutually exclusive categories of health insurance coverage. The first is retiree coverage, where workers keep their health insurance even after leaving their jobs. The second category is tied health insurance, where workers receive employer-provided coverage as long as they continue to work. If a worker with tied health insurance leaves his job, he can keep his health insurance coverage for that year. This is meant to proxy for the fact that most firms must provide COBRA health insurance to workers after they leave their job. After 1 year of tied coverage and not working, the individual's insurance ceases.7 The third category, none, consists of individuals whose potential employers provide no health insurance at all. Workers move between these insurance categories according to

(10)   I_t = \begin{cases} retiree & \text{if } I_{t-1} = retiree, \\ tied & \text{if } I_{t-1} = tied \text{ and } N_{t-1} > 0, \\ none & \text{if } I_{t-1} = none \text{ or } (I_{t-1} = tied \text{ and } N_{t-1} = 0). \end{cases}
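A direct transcription of the transition rule (10), as a sketch (ours):

```python
def next_insurance(I_prev, worked_prev):
    """Equation (10): evolution of health insurance status. A tied worker
    who stops working keeps coverage in the year he leaves (the COBRA
    proxy); the transition to none occurs only the period after N = 0."""
    if I_prev == "retiree":
        return "retiree"
    if I_prev == "tied" and worked_prev:
        return "tied"
    return "none"
```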
2.4. Wages and Spousal Income

We assume that the logarithm of wages at time t, ln Wt, is a function of health status (Ht), age (t), hours worked (Nt), and an autoregressive component, ωt:

(11)   \ln W_t = W(H_t, t) + \alpha \ln N_t + \omega_t
The inclusion of hours, Nt, in the wage determination equation captures the empirical regularity that, all else equal, part-time workers earn relatively lower wages than full-time workers. French (2005) and Erosa, Fuster, and Kambourov (2010) used similar frameworks. The autoregressive component ωt has the correlation coefficient ρW and the normally distributed innovation ηt:

(12)
\omega_t = \rho_W \omega_{t-1} + \eta_t, \qquad \eta_t \sim N(0, \sigma_\eta^2)
Because spousal income can serve as insurance against medical shocks, we include it in the model. In the interest of computational simplicity, we assume that spousal income is a deterministic function of an individual’s age and health status: (13)
ys_t = ys(H_t, t)
7 Although there is some variability across states as to how long individuals are eligible for employer-provided health insurance coverage, by Federal law most individuals are covered for 18 months (Gruber and Madrian (1995)). Given a model period of 1 year, we approximate the 18-month period as 1 year. We do not model the option to take up COBRA, assuming that the take-up rate is 100%. Although the actual take-up rate is around 2/3 (Gruber and Madrian (1996)), we simulated the model by assuming that the rate was 0%, so that individuals transitioned from tied to none as soon as they stopped working, and found very similar labor supply patterns. Thus assuming a 100% take-up rate does not seem to drive our results.
2.5. Social Security and Pensions

Because pensions and Social Security generate potentially important retirement incentives, we model the two programs in detail. Individuals receive no Social Security benefits until they apply. Individuals can first apply for benefits at age 62. Upon applying, the individual receives benefits until death. The individual's Social Security benefits depend on his average indexed monthly earnings (AIME), which is roughly his average income during his 35 highest earnings years in the labor market.

The Social Security system provides three major retirement incentives.8 First, while income earned by workers with less than 35 years of earnings automatically increases their AIME, income earned by workers with more than 35 years of earnings increases their AIME only if it exceeds earnings in some previous year of work. Because Social Security benefits increase in AIME, this causes work incentives to drop after 35 years in the labor market. We describe the computation of AIME in more detail in the Supplemental Material Appendix C.

Second, the age at which the individual applies for Social Security affects the level of benefits. For every year before age 65 the individual applies for benefits, benefits are reduced by 6.67% of the age-65 level. This is roughly actuarially fair. But for every year after age 65 that benefit application is delayed, benefits rise by 5.5% up until age 70. This is less than actuarially fair, and encourages people to apply for benefits by age 65.

Third, the Social Security Earnings Test taxes labor income of beneficiaries at a high rate. For individuals aged 62–64, each dollar of labor income above the "test" threshold of $9,120 leads to a 1/2 dollar decrease in Social Security benefits, until all benefits have been taxed away. For individuals aged 65–69 before 2000, each dollar of labor income above a threshold of $14,500 leads to a 1/3 dollar decrease in Social Security benefits, until all benefits have been taxed away. Although benefits taxed away by the earnings test are credited to future benefits, after age 64 the crediting rate is less than actuarially fair, so that the Social Security Earnings Test effectively taxes the labor income of beneficiaries aged 65–69.9 When combined with the aforementioned incentives to draw Social Security benefits by age 65, the Earnings Test discourages work after age 65. In 2000, the Social Security Earnings Test was abolished for those 65 and older.

8 A description of the Social Security rules can be found in recent editions of the Green Book (Committee on Ways and Means). Some of the rules, such as the benefit adjustment formula, depend on an individual's year of birth. Because we fit our model to a group of individuals who on average were born in 1933, we use the benefit formula for that birth year.

9 The credit rates are based on the benefit adjustment formula. If a year's worth of benefits are taxed away between ages 62 and 64, benefits in the future are increased by 6.67%. If a year's worth of benefits are taxed away between ages 65 and 66, benefits in the future are increased by 5.5%.
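As an illustration of the pre-2000 earnings test described above, the sketch below (ours) applies the thresholds and benefit-reduction rates from the text; it ignores the crediting of taxed-away benefits to future benefits:

```python
def earnings_test(benefit, labor_income, age):
    """Pre-2000 Social Security Earnings Test, simplified."""
    if 62 <= age <= 64:
        reduction = 0.5 * max(0.0, labor_income - 9_120)
    elif 65 <= age <= 69:
        reduction = (1.0 / 3.0) * max(0.0, labor_income - 14_500)
    else:
        reduction = 0.0
    return max(0.0, benefit - reduction)  # benefits can be taxed away, not negative
```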
Because those born in 1933 (the average birth year in our sample) turned 67 in 2000, we assume that the earnings test was repealed at age 67. These incentives are incorporated in the calculation of sst, which is defined to be net of the earnings test.

Pension benefits, pbt, are a function of the worker's age and pension wealth. Pension wealth (the present value of pension benefits) in turn depends on pension accruals. We assume that pension accruals are a function of a worker's age, labor income, and health insurance type, using a formula estimated from confidential HRS pension data. The data show that pension accrual rates differ greatly across health insurance categories; accounting for these differences is essential in isolating the effects of employer-provided health insurance. When finding an individual's decision rules, we assume further that the individual's existing pension wealth is a function of his Social Security wealth, age, and health insurance type. Details of our pension model are described in Section 4.3 and Supplemental Material Appendix D.

2.6. Recursive Formulation

In addition to choosing hours and consumption, eligible individuals decide whether to apply for Social Security benefits; let the indicator variable Bt ∈ {0, 1} equal 1 if an individual has applied. In recursive form, the individual's problem can be written as

(14)
V_t(X_t) = \max_{C_t, N_t, B_t} \bigg\{ \frac{1}{1-\nu} \Big(C_t^{\gamma} (\bar{L} - N_t - \phi_{P,t} P_t - \phi_{RE} RE_t - \phi_H H_t)^{1-\gamma}\Big)^{1-\nu} + \beta (1 - s_{t+1}) b(A_{t+1}) + \beta s_{t+1} \int V_{t+1}(X_{t+1}) \, dF(X_{t+1} \mid X_t, t, C_t, N_t, B_t) \bigg\}
subject to equations (5) and (6). The vector X_t = (A_t, B_{t-1}, H_t, AIME_t, I_t, P_{t-1}, \omega_t, \zeta_{t-1}) contains the individual's state variables, while the function F(·|·) gives the conditional distribution of these state variables, using equations (4) and (7)–(13).10 The solution to the individual's problem consists of the consumption rules, work rules, and benefit application rules that solve equation (14). These decision rules are found numerically using value function iteration. Supplemental Material Appendix E describes our numerical methodology.

10 Spousal income and pension benefits (see Supplemental Material Appendix D) depend only on the other state variables and are thus not state variables themselves.
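Equation (14) is solved by backward induction on a discretized state space. A schematic sketch (ours; the actual grids, choice sets, and integration over F(·|·) are described in Supplemental Material Appendix E):

```python
def solve_by_backward_induction(T, grid, feasible, flow_u, cont_val, beta):
    """cont_val(V_next, x, t, c) should return
    (1 - s_{t+1}) * b(A') + s_{t+1} * E[V_next(X') | x, t, c]."""
    V = [dict() for _ in range(T + 1)]
    V[T] = {x: 0.0 for x in grid}    # beyond the terminal age, value is zero
    for t in range(T - 1, -1, -1):   # iterate backward over ages
        for x in grid:
            V[t][x] = max(flow_u(x, c, t) + beta * cont_val(V[t + 1], x, t, c)
                          for c in feasible(x, t))
    return V
```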
3. ESTIMATION

To estimate the model, we adopt a two-step strategy, similar to the one used by Gourinchas and Parker (2002), French (2005), and Laibson, Repetto, and Tobacman (2007). In the first step, we estimate or calibrate parameters that can be cleanly identified without explicitly using our model. For example, we estimate mortality rates and health transitions straight from demographic data. In the second step, we estimate the preference parameters of the model, along with the consumption floor, using the method of simulated moments (MSM).11

3.1. Moment Conditions

The objective of MSM estimation is to find the preference vector that yields simulated life-cycle decision profiles that "best match" (as measured by a GMM criterion function) the profiles from the data. The following moment conditions comprise our estimator:
(i) Because an individual's ability to self-insure against medical expense shocks depends on his asset level, we match 1/3 and 2/3 asset quantiles by age. We match these quantiles in each of T periods (ages), for a total of 2T moment conditions.
(ii) We match job exit rates by age for each health insurance category. With three health insurance categories (none, retiree, and tied), this generates 3T moment conditions.
(iii) Because the value a worker places on employer-provided health insurance may depend on his wealth, we match labor force participation conditional on the combination of asset quantile and health insurance status. With two quantiles (generating three quantile-conditional means) and three health insurance types, this generates 9T moment conditions.
(iv) To help identify preference heterogeneity, we utilize a series of questions in the HRS that ask workers about their preferences for work. We combine the answers to these questions into a time-invariant index, pref ∈ {high, low, out}, which is described in greater detail in Section 4.4. Matching participation conditional on each value of this index generates another 3T moment conditions.
(v) Finally, we match hours of work and participation conditional on our binary health indicator. This generates 4T moment conditions.
Combined, the five preceding items result in 21T moment conditions. Supplemental Material Appendix F provides a detailed description of the moment conditions, the mechanics of our MSM estimator, the asymptotic distribution of our parameter estimates, and our choice of weighting matrix; a schematic of the criterion appears below.

11 An early application of the MSM to a structural retirement model is Berkovec and Stern (1991).
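Schematically, the MSM estimator minimizes a GMM criterion in the stacked moment conditions. A sketch (ours), with the data moments, the simulator, and the weighting matrix W supplied externally:

```python
import numpy as np

def msm_objective(theta, data_moments, sim_moments, W):
    """GMM criterion g' W g, where g stacks the 21T moment conditions
    (simulated minus data moments) at preference vector theta."""
    g = sim_moments(theta) - data_moments
    return g @ W @ g
```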
3.2. Initial Conditions and Preference Heterogeneity

A key part of our estimation strategy is to compare the behavior of individuals with different forms of employer-provided health insurance. If access to health insurance is an important factor in the retirement decision, we should find that individuals with tied coverage retire later than those with retiree coverage. In making such a comparison, however, we must account for the possibility that individuals with different health insurance options differ systematically along other dimensions as well. For example, individuals with retiree coverage tend to have higher wages and more generous pensions. We control for this "initial conditions" problem in three ways. First, the initial distribution of simulated individuals is drawn directly from the data. Because households with retiree coverage are more likely to be wealthy in the data, households with retiree coverage are more likely to be wealthy in our initial distribution. Similarly, in our initial distribution, households with high levels of education are more likely to have high values of the persistent wage shock ωt. Second, we model carefully the way in which pension and Social Security accrual varies across individuals and groups. Finally, we control for unobservable differences across health insurance groups by introducing permanent preference heterogeneity, using the approach introduced by Heckman and Singer (1984) and adapted by (among others) Keane and Wolpin (1997) and van der Klaauw and Wolpin (2008). Each individual is assumed to belong to one of a finite number of preference "types," with the probability of belonging to a particular type a logistic function of the individual's initial state vector: his age, wealth, initial wages, health status, health insurance type, medical expenditures, and preference index.12 We estimate the type probability parameters jointly with the preference parameters and the consumption floor.

In our framework, correlations between preferences and health insurance emerge because people with different preferences systematically select jobs with different types of health insurance coverage. Workers in our data set are first observed in their 50s; by this age, all else equal, jobs that provide generous postretirement health insurance are more likely to be held by workers who wish to retire early. One way to measure this self-selection is to structurally model the choice of health insurance at younger ages, and use the predictions of that

12 These discrete type-based differences are the only preference heterogeneity in our model. For this reason many individuals in the data make decisions different from what the model would predict. Our MSM procedure circumvents this problem by using moment conditions that average across many individuals. One way to reconcile model predictions with individual observations is to introduce measurement error. In earlier drafts of this paper (French and Jones (2004b)) we considered this possibility by estimating a specification where we allowed for measurement error in assets. Adding measurement error, however, had little effect on either the preference parameter estimates or policy experiments, and we dropped this case.
model to infer the correlation between preferences and health insurance in the first wave of the HRS. Because such an approach is computationally expensive, we instead model the correlation between preferences and health insurance in the initial conditions.

3.3. Wage Selection

We estimate a selection-adjusted wage profile using the procedure developed in French (2005). First, we estimate a fixed-effects wage profile from HRS data, using the wages observed for individuals who are working. The fixed-effects estimator is identified using wage growth for workers. If wage growth rates for workers and nonworkers are the same, composition bias problems—the question of whether high wage individuals drop out of the labor market later than low wage individuals—are not a problem. However, if individuals leave the market because of a wage drop, such as from job loss, then wage growth rates for workers will be greater than wage growth for nonworkers. This selection problem will bias estimated wage growth upward. We control for selection bias by finding the wage profile that, when fed into our model, generates the same fixed-effects profile as the HRS data. Because the simulated fixed-effect profiles are computed using only the wages of those simulated agents that work, the profiles should be biased upward for the same reasons they are in the data. We find this bias-adjusted wage profile using the iterative procedure described in French (2005).

4. DATA AND CALIBRATIONS

4.1. HRS Data

We estimate the model using data from the Health and Retirement Survey (HRS). The HRS is a sample of noninstitutionalized individuals, aged 51–61 in 1992, and their spouses. With the exception of assets and medical expenses, which are measured at the household level, our data are for male household heads. The HRS surveys individuals every 2 years, so that we have eight waves of data covering the period 1992–2006. The HRS also asks respondents retrospective questions about their work history that allow us to infer whether the individual worked in nonsurvey years. Details of this, as well as variable definitions, selection criteria, and a description of the initial joint distribution, are in Supplemental Material Appendix G.

As noted above, the Social Security rules depend on an individual's year of birth. To ensure that workers in our sample face a similar set of Social Security retirement rules, we fit our model to the data for the cohort of individuals aged 57–61 in 1992. However, when estimating the stochastic processes that individuals face, we use the full sample, plus Assets and Health Dynamics of the Oldest Old (AHEAD) data, which provides information on these processes at older ages. With the exception of wages, we do not adjust the data for cohort
effects. Because our subsample of the HRS covers a fairly narrow age range, this omission should not generate much bias.

4.2. Health Insurance and Medical Expenses

We assign individuals to one of three mutually exclusive health insurance groups: retiree, tied, and none, as described in Section 2. Because of small sample problems, the none group includes those who have private health insurance as well as those who have no insurance at all. Both face high medical expenses because they lack employer-provided coverage. Private health insurance is a poor substitute for employer-provided coverage, as high administrative costs and adverse selection problems can result in prohibitively expensive premiums. Moreover, private insurance is much less likely to cover preexisting medical conditions. Because the model includes a consumption floor to capture the insurance provided by Medicaid, the none group also includes those who receive health care through Medicaid. We assign those who have health insurance provided by their spouse to the retiree group, along with those who report that they could keep their health insurance if they left their jobs. Both of these groups have health insurance that is not tied to their job. We assign individuals who would lose their employer-provided health insurance after leaving their job to the tied group. Supplemental Material Appendix H shows our estimated (health insurance-conditional) job exit rate profiles are robust to alternative coding decisions.

The HRS has data on self-reported medical expenses. Medical expenses are the sum of insurance premia paid by households, drug costs, and out-of-pocket costs for hospital, nursing home care, doctor visits, dental visits, and outpatient care. Because our model explicitly accounts for government transfers, the appropriate measure of medical expenses includes expenses paid for by government transfers. Unfortunately, we observe only the medical expenses paid by households, not those paid by Medicaid. Therefore, we impute Medicaid payments for households that received Medicaid benefits, as described in Supplemental Material Appendix G.

We fit these data to the medical expense model described in Section 2. Because of small sample problems, we allow the mean, m(·), and standard deviation, σ(·), to depend only on the individual's Medicare eligibility, health insurance type, health status, labor force participation, and age. Following the procedure described in French and Jones (2004a), m(·) and σ(·) are set so that the model replicates the mean and 95th percentile of the cross-sectional distribution of medical expenses in each of these categories. Details are provided in Supplemental Material Appendix I.

Table I presents summary statistics (in 1998 dollars), conditional on health status. Table I shows that for healthy individuals who are 64 years old, and thus not receiving Medicare, average annual medical expenses are $3,360 for workers with tied coverage and $6,010 for those with none, a difference of $2,650.
TABLE I
MEDICAL EXPENSES, BY MEDICARE AND HEALTH INSURANCE STATUS

                              Retiree                  Tied               None
                       Working  Not Working    Working  Not Working

Age = 64, without Medicare, Good Health
Mean                   $3,160   $3,880         $3,360   $5,410         $6,010
Standard deviation     $5,460   $7,510         $5,040   $10,820        $15,830
99.5th percentile      $32,700  $44,300        $30,600  $63,500        $86,900

Age = 65, with Medicare, Good Health
Mean                   $3,320   $3,680         $3,830   $4,230         $4,860
Standard deviation     $4,740   $5,590         $5,920   $9,140         $7,080
99.5th percentile      $28,800  $33,900        $35,800  $52,800        $43,000

Age = 64, without Medicare, Bad Health
Mean                   $3,930   $4,830         $4,170   $6,730         $7,470
Standard deviation     $6,940   $9,530         $6,420   $13,740        $20,060
99.5th percentile      $41,500  $56,100        $38,900  $80,400        $109,500

Age = 65, with Medicare, Bad Health
Mean                   $4,130   $4,580         $4,760   $5,260         $6,040
Standard deviation     $6,030   $7,120         $7,530   $11,590        $9,020
99.5th percentile      $36,600  $43,000        $45,500  $66,700        $54,700
With the onset of Medicare at age 65, the difference shrinks to $1,030.13 Thus, the value of having employer-provided health insurance coverage largely vanishes at age 65.

As Rust and Phelan (1997) emphasized, it is not just differences in mean medical expenses that determine the value of health insurance, but also differences in variance and skewness. If health insurance reduces medical expense volatility, risk-averse individuals may value health insurance at well beyond the cost paid by employers. To give a sense of the volatility, Table I also presents the standard deviation and 99.5th percentile of the medical expense distributions. Table I shows that for healthy individuals who are 64 years old, annual medical expenses have a standard deviation of $5,040 for workers with tied coverage and $15,830 for those with none, a difference of $10,790. With the onset of Medicare at age 65, the difference shrinks to $1,160. Therefore, Medicare not only reduces average medical expenses for those without employer-provided health insurance, it reduces medical expense volatility as well.

The parameters for the idiosyncratic process ψt, (σξ², σε², ρm), are taken from French and Jones (2004a, "fitted" specification). Table II presents the parameter estimates.

13 The pre-Medicare cost differences are roughly comparable to the Employee Benefit Research Institute's (EBRI (1999)) estimate that employers on average contribute $3,288 per year to their employees' health insurance. They are larger than Gustman and Steinmeier's (1994) estimate that employers contribute about $2,500 per year before age 65 (1977 NMES data, adjusted to 1998 dollars with the medical component of the consumer price index (CPI)).
TABLE II
VARIANCE AND PERSISTENCE OF INNOVATIONS TO MEDICAL EXPENSES

Parameter   Variable                                        Estimate (Standard Error)
ρm          Autocorrelation of persistent component         0.925   (0.003)
σε²         Innovation variance of persistent component     0.04811 (0.008)
σξ²         Innovation variance of transitory component     0.6668  (0.014)
Table II reveals that at any point in time, the transitory component generates almost 67% of the cross-sectional variance in medical expenses. The results in French and Jones (2004a) reveal, however, that most of the variance in cumulative lifetime medical expenses is generated by innovations to the persistent component. For this reason, the cross-sectional distribution of medical expenses reported in Table I understates the lifetime risk of medical expenses. Given the autocorrelation coefficient ρm of 0.925, this is not surprising.
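To make the process concrete, the following sketch (a minimal illustration, not the authors' estimation code) simulates the idiosyncratic shock ψt as the sum of a persistent AR(1) component and a transitory component, using the Table II estimates. With these values the persistent component's stationary variance is 0.04811/(1 − 0.925²) ≈ 0.333, so the total variance is indeed 1 and the transitory share is the roughly 67% noted above.

```python
import numpy as np

rho_m, var_eps, var_xi = 0.925, 0.04811, 0.6668   # Table II estimates
T = 40
rng = np.random.default_rng(0)

# Persistent AR(1) component, started from its stationary distribution
var_zeta = var_eps / (1.0 - rho_m**2)             # ~0.333, so total variance is ~1
zeta = np.empty(T)
zeta[0] = rng.normal(0.0, np.sqrt(var_zeta))
for t in range(1, T):
    zeta[t] = rho_m * zeta[t - 1] + rng.normal(0.0, np.sqrt(var_eps))

# Idiosyncratic shock: persistent plus transitory component
psi = zeta + rng.normal(0.0, np.sqrt(var_xi), size=T)
```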
4.3. Pension Accrual

Supplemental Material Appendix D describes how we use confidential HRS pension data to construct the accrual rate formula. Figure 1 shows the average pension accrual rates generated by this formula when we simulate the model.

FIGURE 1.—Average pension accrual rates by age and health insurance coverage.
Figure 1 reveals that workers with retiree coverage face the sharpest drops in pension accrual after age 60.14 While retiree coverage in and of itself provides an incentive for early retirement, the pension plans associated with retiree coverage also provide the strongest incentives for early retirement. Failing to capture this link will lead the econometrician to overstate the effect of retiree coverage on retirement.

14. Because Figure 1 is based on our estimation sample, it does not show accrual rates for earlier ages. Estimates that include the validation sample show, however, that those with retiree coverage have the highest pension accrual rates in their early and middle 50s.

4.4. Preference Index

To better measure preference heterogeneity in the population (and how it is correlated with health insurance), we estimate a person's "willingness" to work using three questions from the first (1992) wave of the HRS. The first question asks the respondent the extent to which he agrees with the statement, "Even if I didn't need the money, I would probably keep on working." The second question asks the respondent, "When you think about the time when you will retire, are you looking forward to it, are you uneasy about it, or what?" The third question asks, "How much do you enjoy your job?"

To combine these three questions into a single index, we regress participation in waves 5–7 (survey years 2000–2004) on the responses to the three questions, along with polynomials and interactions of all the state variables in the model: age, health status, wages, wealth, AIME, medical expenses, and health insurance type. Multiplying the numerical responses to the three questions by their respective estimated coefficients and summing yields an index. We then discretize the index into three values: high for the top 50% of the index among those working in wave 1; low for the bottom 50% of the index among those working in wave 1; and out for those not working in wave 1. Supplemental Material Appendix J provides additional details on the construction of the index. Figure 6 below shows that the index has considerable predictive power: at age 65, participation rates are 56% for those with an index of high, 39% for those with an index of low, and 12% for those with an index of out.
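As an illustration of this two-step construction (regress, then weight and discretize), the following sketch uses synthetic data; every variable name, coefficient, and sample size is a hypothetical stand-in, since the actual regressors and estimates are in Supplemental Material Appendix J.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000                                           # synthetic respondents (illustration only)
q = rng.integers(1, 5, size=(n, 3)).astype(float)   # stand-ins for the three 1992 question responses
states = rng.normal(size=(n, 4))                    # stand-ins for the state-variable polynomials
y = (q @ np.array([0.10, -0.05, 0.08])              # stand-in for waves 5-7 participation
     + states @ np.array([0.2, 0.1, 0.0, -0.1])
     + rng.normal(size=n) > 0.5).astype(float)

X = sm.add_constant(np.column_stack([q, states]))
beta = sm.OLS(y, X).fit().params                    # participation regression

index = q @ beta[1:4]                               # weight the three questions by their coefficients
wave1_work = rng.random(n) < 0.7                    # stand-in for working in wave 1 (1992)
cut = np.median(index[wave1_work])
pref = np.where(~wave1_work, "out",
                np.where(index >= cut, "high", "low"))   # discretized preference index
```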
4.5. Wages

Recall from equation (11) that

ln Wt = α ln(Nt) + W(Ht, t) + ωt.

Following Aaronson and French (2004), we set α = 0.415, which implies that a 50% drop in work hours leads to a 25% drop in the offered hourly wage. This is in the middle of the range of estimates of the effect of hours worked on the offered hourly wage. We estimate W(Ht, t) using the methodology described in Section 3.3.
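The 25% figure follows directly from the functional form (a worked check, simple arithmetic not shown in the paper):

\[
\frac{W(N_t/2)}{W(N_t)} = \left(\tfrac{1}{2}\right)^{0.415} = e^{-0.415 \ln 2} \approx 0.75 ,
\]

so halving hours lowers the offered hourly wage by roughly 25 percent.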
The parameters for the idiosyncratic process ωt, (ση², ρW), are estimated by French (2005). The results indicate that the autocorrelation coefficient ρW is 0.977; wages are almost a random walk. The estimate of the innovation variance ση² is 0.0141, so a 1-standard-deviation innovation changes the wage by about 12%.

4.6. Remaining Calibrations

We set the interest rate r equal to 0.03. Spousal income depends on an age polynomial and health status. Health status and mortality both depend on previous health status interacted with an age polynomial.

5. DATA PROFILES AND INITIAL CONDITIONS

5.1. Data Profiles

Figure 2 presents some of the labor market behavior we want our model to explain. The top panel of Figure 2 shows empirical job exit rates by health insurance type. Recall that Medicare should provide the largest labor market incentives for workers who have tied health insurance. If these people place a high value on employer-provided health insurance, they should either work until age 65, when they become eligible for Medicare, or work until age 63.5 and use COBRA coverage as a bridge to Medicare.

The job exit profiles provide some evidence that those who have tied coverage do tend to work until age 65. While the age-65 job exit rate is similar for those whose health insurance type is tied (20%), retiree (17%), or none (18%), those with retiree coverage have higher exit rates at 62 (22%) than those with tied (14%) or no (18%) coverage.15 At almost every age other than 65, those who have retiree coverage have higher job exit rates than those with tied or no coverage. These differences across health insurance groups, while large, are smaller than the differences in the empirical exit profiles reported by Rust and Phelan (1997).

The low job exit rates before age 65 and the relatively high job exit rates at age 65 for those who have tied coverage suggest that some people who have tied coverage are working until age 65, when they become eligible for Medicare. On the other hand, job exit rates for those who have tied coverage are lower than those of workers with retiree coverage at every age other than 65, and are not much higher at age 65. This suggests that differences in health insurance coverage may not be the only reason for the differences in job exit rates.

15. The differences across groups are statistically different at 62, but not at 65. Furthermore, F-tests reject the hypothesis that the three groups have identical exit rates at all ages at the 5% level.
FIGURE 2.—Job exit and participation rates: data.
The bottom panel of Figure 2 presents observed labor force participation rates. In comparing participation rates across health insurance categories, it is useful to keep in mind the transitions implied by equation (10): retiring workers in the tied insurance category transition into the none category. Because of this, the labor force participation rates for those who have tied insurance are calculated for a group of individuals who were all working in the previous period. It is thus not surprising that the tied category has the highest participation rates. Conversely, it is not surprising that the none category has the lowest participation rates, given that it includes tied workers who retire.
5.2. Initial Conditions

Each artificial individual in our model begins its simulated life with the year-1992 state vector of an individual, aged 57–61 in 1992, observed in the data. Table III summarizes this initial distribution, the construction of which is described in Supplemental Material Appendix G. Table III shows that individuals with retiree coverage tend to have the most asset and pension wealth, while individuals in the none category have the least; the median individual in the none category has no pension wealth at all. Individuals in the none category are also more likely to be in bad health and, not surprisingly, less likely to be working. In contrast, individuals who have tied coverage have high values of the preference index, suggesting that their delayed retirement reflects differences in preferences as well as in incentives.
TABLE III
SUMMARY STATISTICS FOR THE INITIAL DISTRIBUTION

                                               Retiree    Tied    None
Age
  Mean                                           58.7     58.6    58.7
  Standard deviation                              1.5      1.5     1.5
AIME (in thousands of 1998 dollars)
  Mean                                           24.9     24.9    16.0
  Median                                         27.2     26.9    16.2
  Standard deviation                              9.1      8.6     9.0
Assets (in thousands of 1998 dollars)
  Mean                                            231      205     203
  Median                                          147      118      53
  Standard deviation                              248      251     307
Pension wealth (in thousands of 1998 dollars)
  Mean                                            129       80      17
  Median                                           62       17       0
  Standard deviation                              180      212     102
Wage (in 1998 dollars)
  Mean                                           17.4     17.6    12.0
  Median                                         14.7     14.6     8.6
  Standard deviation                             13.4     12.4    11.2
Preference index
  Fraction out                                   0.27     0.04    0.48
  Fraction low                                   0.42     0.44    0.19
  Fraction high                                  0.32     0.52    0.33
Fraction in bad health                           0.20     0.13    0.41
Fraction working                                 0.73     0.96    0.52
Number of observations                          1,022      225     455
6. BASELINE RESULTS

6.1. Preference Parameter Estimates

The goal of our MSM estimation procedure is to match the life-cycle profiles for assets, hours, and participation found in the HRS data. To use these profiles to identify preferences, we make several identifying assumptions, the most important being that preferences vary with age in only two specific ways: (i) through changes in health status and (ii) through the linear time trend in the fixed cost of work, φPt. Age can therefore be thought of as an "exclusion restriction" that changes the incentives for work and savings in ways that cannot be captured by changes in preferences.

Table IV presents the preference parameter estimates. The first three rows of Table IV show the parameters that vary across the preference types. We assume that there are three types of individuals and that the types differ in the utility weight on consumption, γ, and their time discount factor, β. Individuals who have high values of γ have stronger preferences for work; individuals who have high values of β are more patient and thus more willing to defer consumption and leisure.

TABLE IV
ESTIMATED STRUCTURAL PARAMETERS^a

Parameters That Vary Across Individuals
                               Type 0           Type 1           Type 2
γ: Consumption weight          0.412 (0.045)    0.649 (0.007)    0.967 (0.203)
β: Time discount factor        0.945 (0.074)    0.859 (0.013)    1.124 (0.328)
Fraction of individuals        0.267            0.615            0.118

Parameters That Are Common to All Individuals
ν: Coefficient of relative risk aversion, utility            7.49   (0.312)
θB: Bequest weight^b                                         0.0223 (0.0012)
κ: Bequest shifter, in thousands                             444    (28.2)
cmin: Consumption floor                                      4,380  (167)
L: Leisure endowment, in hours                               4,060  (44)
φH: Hours of leisure lost to bad health                      506    (20.9)
φP0: Fixed cost of work at age 60, in hours                  826    (20.0)
φP1: Fixed cost of work, age trend, in hours                 54.7   (2.57)
φRE: Hours of leisure lost when reentering labor market      94.0   (8.64)

χ² statistic = 751                        Degrees of freedom = 171

a. Method of simulated moments estimates. Estimates use a diagonal weighting matrix (see Supplemental Material Appendix F for details). Standard errors are given in parentheses. Parameters are estimated jointly with the type prediction equation; the estimated coefficients for the type prediction equation are shown in Supplemental Material Appendix K.
b. Parameter expressed as the marginal propensity to consume out of final-period wealth.
Table IV reveals significant differences in γ and β across preference types, which are discussed in some detail in Section 6.2. Table IV also shows the fraction of workers who belong to each preference type; the coefficients for the preference type prediction equation are shown in Supplemental Material Appendix K. Averaging over the three types reveals that the average value of the discount factor β implied by our model is 0.913, which is slightly lower than most estimates. The discount factor is identified by the intertemporal substitution of consumption and leisure, as embodied in the asset and labor supply profiles.

Another key parameter is ν, the coefficient of relative risk aversion for the consumption–leisure composite. A more familiar measure of risk aversion is the coefficient of relative risk aversion for consumption. Assuming that labor supply is fixed, it can be approximated as −(∂²U/∂C²)C/(∂U/∂C) = −(γ(1 − ν) − 1). The weighted average value of this coefficient is 5.0. This value falls within the range of estimates found in recent studies by Cagetti (2003) and French (2005), but it is larger than the values of 1.1, 1.8, and 1.0 reported by Rust and Phelan (1997), Blau and Gilleskie (2006), and Blau and Gilleskie (2008), respectively, in their studies of retirement.

The risk coefficient ν and the consumption floor Cmin are identified in large part by the asset quantiles, which reflect precautionary motives. The bottom quantile in particular depends on the interaction of precautionary motives and the consumption floor. If the consumption floor is sufficiently low, the risk of a catastrophic medical expense shock, which over a lifetime could exceed $100,000 (see French and Jones (2004a)), will generate strong precautionary incentives. Conversely, as emphasized by Hubbard, Skinner, and Zeldes (1995), a high consumption floor discourages saving among the poor, since the consumption floor effectively imposes a 100% tax on the savings of those with high medical expenses and low income and assets.

Our estimated consumption floor of $4,380 is similar to other estimates of social insurance transfers for the indigent. For example, when we use Hubbard, Skinner, and Zeldes's (1994, Appendix A) procedures and more recent data, we find that the average benefit available to a childless household with no members aged 65 or older was $3,500. A value of $3,500 understates the benefits available to individuals over age 65; in 1998, the Federal SSI benefit for elderly (65+) couples was nearly $9,000 (Committee on Ways and Means (2000, p. 229)).16 On the other hand, about half of eligible households do not collect SSI benefits (Elder and Powers (2006, Table 2)), possibly because transactions or "stigma" costs outweigh the value of public assistance. Low take-up rates, along with the costs that probably underlie them, suggest that the effective consumption floor need not equal statutory benefits.

16. Our framework also lacks explicit disability insurance. A recent structural analysis of this program is Low and Pistaferri (2010).
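The consumption floor enters the budget as a means-tested transfer. The sketch below is a minimal illustration in the spirit of Hubbard, Skinner, and Zeldes (1995); the paper's exact transfer rule, timing, and tax treatment are specified in Section 2, so the function and its arguments are stand-ins.

```python
C_MIN = 4_380.0  # estimated consumption floor, Table IV (1998 dollars)

def transfer(assets: float, income: float, medical: float) -> float:
    """Means-tested transfer that tops resources up to the floor
    (an illustration, not the authors' implementation)."""
    resources = assets + income - medical
    return max(0.0, C_MIN - resources)

# A household $1,000 short of the floor receives exactly $1,000: each extra
# dollar it saves reduces the transfer one-for-one, an implicit 100% tax
# on the savings of households likely to hit the floor.
assert transfer(assets=2_000.0, income=5_000.0, medical=3_620.0) == 1_000.0
```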
The bequest parameters θB and κ are identified largely by the top asset quantile. It follows from equation (3) that when the shift parameter κ is large, the marginal utility of bequests will be lower than the marginal utility of consumption unless the individual is rich. In other words, the bequest motive mainly affects the saving of the rich; for more on this point, see De Nardi (2004). Our estimate of θB implies that the marginal propensity to consume out of wealth in the final period of life (which is a nonlinear function of θB, β, γ, ν, and κ) is 1 for low-income individuals and 0.022 for high-income individuals.

Turning to labor supply, we find that individuals in our sample are willing to intertemporally substitute their work hours. In particular, simulating the effects of a 2% wage change reveals that the wage elasticity of average hours is 0.486 at age 60. This relatively high labor supply elasticity arises because the fixed cost of work generates volatility on the participation margin. The participation elasticity is 0.353 at age 60, implying that wage changes cause relatively small hours changes for workers. For example, the Frisch labor supply elasticity of a type-1 individual working 2,000 hours per year at age 60 is approximately

−[(L − Nt − φP0)/Nt] × 1/[(1 − γ)(1 − ν) − 1] = 0.19.

The fixed cost of work at age 60, φP0, is 826 hours per year, and it increases by φP1 = 55 hours per year thereafter. The fixed cost of work is identified by the life-cycle profile of hours worked by workers: average hours of work (available upon request) do not drop below 1,000 hours per year (or 20 hours per week, 50 weeks per year) even though labor force participation rates decline to near zero. In the absence of a fixed cost of work, one would expect hours worked to parallel the decline in labor force participation (Rogerson and Wallenius (2009)). The time endowment L is identified by the combination of the participation and hours profiles. The time cost of bad health, φH, is identified by noting that unhealthy individuals work fewer hours than healthy individuals, even after conditioning on wages. The reentry cost, φRE, of 94 hours is identified by exit rates: in the absence of a reentry cost, workers would be more willing to "churn" in and out of the labor force, raising exit rates.
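Plugging the Table IV estimates for a type-1 individual (L = 4,060, φP0 = 826, γ = 0.649, ν = 7.49) and Nt = 2,000 into the Frisch approximation above confirms the 0.19 figure:

\[
-\frac{L - N_t - \phi_{P0}}{N_t}\times\frac{1}{(1-\gamma)(1-\nu)-1}
= -\frac{4060 - 2000 - 826}{2000}\times\frac{1}{(1-0.649)(1-7.49)-1}
= \frac{0.617}{3.28} \approx 0.19 .
\]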
6.2. Preference Heterogeneity and Health Insurance

Table IV shows considerable heterogeneity in preferences. To understand these differences, Table V shows simulated summary statistics for each of the preference types. Table V reveals that type-0 individuals have the lowest value of γ, that is, they place the highest value on leisure: 92% of type-0 individuals were out of the labor force in wave 1. Type-2 individuals, in contrast, have the highest value of γ: 84% of type-2 individuals have a preference index of high, meaning that they were working in wave 1 and self-reported having a low preference for leisure. Type-1 individuals fall in the middle, valuing leisure less than type-0 individuals, but more than type-2 individuals: 54% of type-1 individuals have a preference index value of low.
TABLE V
MEAN VALUES BY PREFERENCE TYPE: SIMULATIONS

                                                  Type 0   Type 1   Type 2
Key Preference Parameters^a
  γ                                               0.412    0.649    0.967
  β                                               0.945    0.859    1.124
Means by Preference Type
  Assets ($1,000s)                                150      215      405
  Pension wealth ($1,000s)                        92       97       74
  Wages ($/hour)                                  11.3     19.0     11.1
Probability of Health Insurance Type, Given Preference Type
  Health insurance = none                         0.371    0.222    0.261
  Health insurance = retiree                      0.607    0.603    0.581
  Health insurance = tied                         0.023    0.175    0.158
Probability of Preference Index Value, Given Preference Type
  Preference index = out                          0.922    0.068    0.034
  Preference index = low                          0.039    0.539    0.131
  Preference index = high                         0.039    0.392    0.835
Fraction of individuals                           0.267    0.615    0.118

a. Values of β and γ are from Table IV.
Including preference heterogeneity allows us to control for the possibility that workers with different preferences select jobs with different health insurance packages. Table V suggests that some self-selection is occurring: while 14% of workers with tied coverage are type-2 agents, who have the lowest disutility of work, only 5% are type-0 agents, who have the highest disutility. In contrast, 11% of workers with retiree coverage are type-2 agents and 27% are type-0 agents. This suggests that workers who have tied coverage might be more willing to retire later than those who have retiree coverage because they have a lower disutility of work. However, Section 6.4 shows that accounting for this correlation has little impact on the estimated effect of health insurance on retirement.

6.3. Simulated Profiles

The bottom of Table IV displays the overidentification test statistic. Even though the model is formally rejected, the life-cycle profiles generated by the model match up well with the life-cycle profiles found in the data.

Figure 3 shows the 1/3 and 2/3 asset quantiles at each age for the HRS sample and for the model simulations. For example, at age 64 about 1/3 of the men in our sample live in households with less than $80,000 in assets, and about 1/3 live in households with over $270,000 of assets. Figure 3 shows that the model fits both asset quantiles well.
FIGURE 3.—Asset quantiles: data and simulations.
The model is able to fit the lower quantile in large part because of the consumption floor of $4,380; the predicted 1/3 quantile rises when the consumption floor is lowered.

The three panels in the left-hand column of Figure 4 show that the model is able to replicate the two key features of how labor force participation varies with age and health insurance. The first key feature is that participation declines with age, and the declines are especially sharp between ages 62 and 65. The model underpredicts the decline in participation at age 65 (a 4.9 percentage point decline in the data versus a 3.5 percentage point decline predicted by the model), but comes closer at age 62 (a 10.6 percentage point decline in the data versus a 10.9 percentage point decline predicted by the model).

The second key feature is that there are large differences in participation and job exit rates across health insurance types. The model does a good job of replicating observed differences in participation rates. For example, the model matches the low participation levels of the uninsured. Turning to the lower left panel of Figure 5, the data show that the group with the lowest participation rates is the uninsured with low assets. The model is able to replicate this fact because of the consumption floor. Without a high consumption floor, the risk of catastrophic medical expenses, in combination with risk aversion, would cause the uninsured to remain in the labor force and accumulate a buffer stock of assets.

The panels in the right-hand column of Figure 4 compare observed and simulated job exit rates for each health insurance type. The model does a good job of fitting the exit rates of workers with retiree or tied coverage. For example, the model captures the high age-62 job exit rates for those with retiree coverage and the high age-65 job exit rates for those with tied coverage. However, it fails to capture the high exit rates at age 65 for workers with no health insurance.
FIGURE 4.—Participation and job exit rates: data and simulations.
Figure 6 shows how participation differs across the three values of the discretized preference index constructed from the HRS attitudinal questions. Recall that an index value of out implies that the individual was not working in 1992. Not surprisingly, participation for this group is always low. Individuals who have positive values of the preference index differ primarily in the rate at which they leave the labor force.
FIGURE 5.—Labor force participation rates by asset grouping: data and simulations.
Although low-index individuals initially work as much as high-index individuals, they leave the labor force more quickly. As noted in our discussion of the preference parameters, the model replicates these differences by allowing the taste for leisure (γ) and the discount factor (β) to vary across preference types. When we do not allow for preference heterogeneity, the model is unable to replicate the patterns observed in Figure 6.
FIGURE 6.—Labor force participation rates by preference index: data and simulations.
This highlights the importance of the preference index in identifying preference heterogeneity.

6.4. The Effects of Employer-Provided Health Insurance

The labor supply patterns in Figures 2 and 4 show that those who have retiree coverage retire earlier than those who have tied coverage. However, the profiles alone do not identify the effects of health insurance on retirement, for three reasons. First, as shown in Table III, those who have retiree coverage have greater pension wealth than other groups. Second, as shown in Figure 1, pension plans for workers who have retiree coverage provide stronger incentives for early retirement than the pension plans held by other groups. Third, as shown in Table V, preferences for leisure vary by health insurance type. In short, retirement incentives differ across health insurance categories for reasons unrelated to health insurance itself.

To isolate the effects of employer-provided health insurance on labor supply, we conduct additional simulations. We give everyone the pension accrual rates of tied workers, so that pension incentives are identical across health insurance types. We then simulate the model twice, assuming first that all workers have retiree health insurance coverage at age 59 and then assuming that all have tied coverage at age 59. Across the two simulations, households face different medical expense distributions, but in all other dimensions the distribution of incentives and preferences is identical.

This exercise reveals that if all workers had retiree coverage rather than tied coverage, the job exit rate at age 62 would be 8.5 percentage points higher. In contrast, the raw difference in model-predicted exit rates at age 62 is 10.5 percentage points. (The raw difference in the data is 8.2 percentage points.) The high age-62 exit rates of those who have retiree coverage are thus partly due to more generous pensions and stronger preferences for leisure. Even after controlling for these factors, however, health insurance remains an important determinant of retirement.
The effects of health insurance can also be measured by comparing participation rates. We find that the labor force participation rate for ages 60–69 would be 5.1 percentage points lower if everyone had retiree, rather than tied, coverage at age 59. Conversely, moving everyone from retiree to tied coverage increases the average retirement age (defined as the oldest age at which the individual works, plus 1) by 0.34 years.

In comparison, Blau and Gilleskie's (2001) reduced-form estimates imply that having retiree coverage, rather than tied coverage, increases the job exit rate 7.5 percentage points at age 61. Blau and Gilleskie also found that accounting for selection into health insurance plans modestly increases the estimated effect of health insurance on exit rates. Other reduced-form findings in the literature are qualitatively similar to Blau and Gilleskie's. For example, Madrian, Burtless, and Gruber (1994) found that retiree coverage reduces the retirement age by 0.4–1.2 years, depending on the specification and the data employed. Karoly and Rogowski (1994), who attempted to account for selection into health insurance plans, found that retiree coverage increases the job exit rate 8 percentage points over a 2½-year period. Our estimates, therefore, lie at the lower end of the range established by previous reduced-form studies, giving us confidence that the model can be used for policy analysis.

Structural studies that omit medical expense risk find smaller health insurance effects than we do. For example, Gustman and Steinmeier (1994) found that retiree coverage reduces years in the labor force by 0.1 years. Lumsdaine, Stock, and Wise (1994) found even smaller effects. Structural studies that include medical expense risk but omit self-insurance find bigger effects. Our estimated effects are larger than those of Blau and Gilleskie (2006, 2008), who found that retiree coverage reduces average labor force participation by 1.7 and 1.6 percentage points, respectively, but are smaller than the effects found by Rust and Phelan (1997).17

17. Blau and Gilleskie (2006) considered the retirement decision of couples, and allowed husbands and wives to retire at different dates. Blau and Gilleskie (2008) allowed workers to choose their medical expenses. Because these modifications provide additional mechanisms for smoothing consumption over medical expense shocks, they could reduce the effect of employer-provided health insurance.

6.5. Model Validation

Following several recent studies (e.g., Keane and Wolpin (2007)), we perform an out-of-sample validation exercise. Recall that we estimate the model on a cohort of individuals aged 57–61 in 1992. We test our model by considering the HRS cohort aged 51–55 in 1992; we refer to this group as our validation cohort. These individuals faced different Social Security incentives than did the estimation cohort: the validation cohort did not face the Social Security earnings test after age 65, had a later full retirement age, and faced a benefit adjustment formula that more strongly encouraged delayed retirement.
TABLE VI
PARTICIPATION RATES BY BIRTH YEAR COHORT

                          Data                              Model
Age           1933     1939     Difference^a     1933     1939     Difference^a
60            0.657    0.692    0.035            0.650    0.706    0.056
61            0.636    0.642    0.006            0.622    0.677    0.055
62            0.530    0.545    0.014            0.513    0.570    0.057
63            0.467    0.508    0.041            0.456    0.490    0.035
64            0.408    0.471    0.063            0.413    0.449    0.037
65            0.358    0.424    0.066            0.378    0.459    0.082
66            0.326    0.382    0.057            0.350    0.430    0.080
67            0.314    0.374    0.060            0.339    0.386    0.047
Total, 60–67  3.696    4.037    0.341            3.721    4.168    0.447

a. The 1939 column minus the 1933 column.
In addition to facing different Social Security rules, the validation cohort possessed different endowments of wages, wealth, and employer benefits. A useful test of our model, therefore, is to see whether it can predict the behavior of the validation cohort.

The Data columns of Table VI show the participation rates observed in the data for each cohort and the difference between them. The data suggest that the change in the Social Security rules coincides with increased labor force participation, especially at later ages. By way of comparison, Song and Manchester (2007), examining Social Security administrative data, found that between 1996 and 2003, participation rates increased by 3, 4, and 6 percentage points for workers turning 62–64, 65, and 66–69, respectively. These differences are similar to the differences between the 1933 and 1939 cohorts in our data, as shown in the fourth column.

The Model columns of Table VI show the participation rates predicted by the model. The simulations for the validation cohort use the initial distribution and Social Security rules for the validation cohort, but use the parameter values estimated on the older estimation cohort.18 Comparing the difference columns shows that the model-predicted increase in labor supply (0.45 years) resembles the increase observed in the data (0.35 years).

18. We do not adjust for business cycle conditions. Because the validation cohort starts at age 53, 6 years before the estimation cohort, the validation exercise requires its own wage selection adjustment and pension prediction equation. Using the baseline preference estimates, we construct these inputs in the same way we construct their baseline counterparts. In addition, we adjust the intercept terms in the type prediction equations so that the validation cohort generates the same distribution of preference types as the estimation sample.
7. POLICY EXPERIMENTS

The preceding sections showed that the model fits the data well, given plausible preference parameters. In this section, we use the model to predict how changing the Social Security and Medicare rules would affect retirement behavior. The results of these experiments are summarized in Table VII.

The first data column of Table VII shows model-predicted labor market participation at ages 60–69 under the 1998 Social Security rules. Under the 1998 rules, the average person works a total of 4.29 years over this 10-year period. The last column of Table VII shows that this is close to the total of 4.28 years observed in the data.

The Social Security rules are slowly evolving over time. If current plans continue, by 2030 the normal Social Security retirement age, the age at which workers can receive "full benefits," will have risen from 65 to 67. Raising the normal retirement age to 67 effectively eliminates 2 years of Social Security benefits. The second data column shows the effect of this change.19 The wealth effect of lower benefits leads years of work to increase by 0.076 years, to 4.37 years.20

TABLE VII
EFFECTS OF CHANGING THE SOCIAL SECURITY RETIREMENT AND MEDICARE ELIGIBILITY AGES^a

           SS = 65    SS = 67^b   SS = 65    SS = 67^b
Age        MC = 65    MC = 65     MC = 67    MC = 67     Data
60         0.650      0.651       0.651      0.652       0.657
61         0.622      0.625       0.623      0.626       0.636
62         0.513      0.526       0.516      0.530       0.530
63         0.456      0.469       0.460      0.472       0.467
64         0.413      0.426       0.422      0.433       0.407
65         0.378      0.386       0.407      0.415       0.358
66         0.350      0.358       0.374      0.381       0.326
67         0.339      0.346       0.341      0.347       0.314
68         0.307      0.311       0.307      0.312       0.304
69         0.264      0.270       0.264      0.270       0.283

Total 60–69  4.292    4.368       4.366      4.438       4.283

a. SS = Social Security normal retirement age; MC = Medicare eligibility age.
b. Benefits reduced by 2 years, as described in the text.
19. Under the 2030 rules, an individual claiming benefits at age 65 would receive an annual benefit 13.3% smaller than the benefit he would have received under the 1998 rules (holding AIME constant). We thus implement the 2-year reduction in benefits by reducing annual benefits by 13.3% at every age.
20. In addition to reducing annual benefits, the intended 2030 rules would impose two other changes. First, the rate at which benefits increase for delaying retirement past the normal retirement age would rise from 5.5% to 8.0%. This change, like the reduction in annual benefits, should encourage work. Second, raising the normal retirement age implies that the relevant earnings test for ages 65–66 would become the stricter, early-retirement test. This change should discourage work. We find that when we switch from the 1998 to the 2030 rules, the effects of the three changes cancel out, so that total hours over ages 60–69 are essentially unchanged.
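The 13.3% figure in footnote 19 is consistent with the statutory early-claiming schedule (an outside fact, not stated in the paper), under which benefits are reduced by 5/9 of 1 percent for each of the first 36 months claimed before the normal retirement age. Claiming at 65 with a normal retirement age of 67 means claiming 24 months early, so

\[
24 \times \tfrac{5}{9}\% \approx 13.3\% .
\]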
The third data column of Table VII shows participation when the Medicare eligibility age is increased to 67.21 Over a 10-year period, total years of work increase by 0.074 years, so that the average probability of employment increases by 0.74 percentage points per year. This amount is larger than the changes found by Blau and Gilleskie (2006), whose simulations show that increasing the Medicare age increases the average probability of employment by 0.1 percentage points, but is smaller than the effects suggested by Rust and Phelan's (1997) analysis.

The fourth data column shows the combined effect of cutting Social Security benefits and raising the Medicare eligibility age. The joint effect is an increase of 0.146 years, 0.072 years more than that generated by cutting Social Security benefits in isolation. In summary, the model predicts that raising the Medicare eligibility age will have almost the same effect on retirement behavior as the benefit reductions associated with a higher Social Security retirement age.

Medicare has an even bigger effect on those who have tied coverage at age 59.22 Simulations reveal that for those who have tied coverage, eliminating 2 years of Social Security benefits increases years in the labor force by 0.12 years, whereas shifting forward the Medicare eligibility age to 67 would increase years in the labor force by 0.28 years.

To understand better the incentives generated by Medicare, we compute the value type-1 individuals place on employer-provided health insurance by finding the increase in assets that would make an uninsured type-1 individual as well off as a person with retiree coverage. In particular, we find the compensating variation λt = λ(At, Bt, Ht, AIMEt, ωt, ζt−1, t), where

Vt(At, Bt, Ht, AIMEt, ωt, ζt−1, retiree) = Vt(At + λt, Bt, Ht, AIMEt, ωt, ζt−1, none).

21. By shifting forward the Medicare eligibility age to 67, we increase from 65 to 67 the age at which medical expenses can follow the "with Medicare" distribution shown in Table I.
22. Only 13% of the workers in our sample had tied coverage at age 59. In contrast, Kaiser/HRET (2006) estimated that about 50% of large firms offered tied coverage in the mid-1990s. We might understate the share with tied coverage because, as shown in the Kaiser/HRET study, the fraction of workers with tied (instead of retiree) coverage grew rapidly in the 1990s, and our health insurance measure is based on wave-1 data collected in 1992. In fact, the HRS data indicate that later waves had a higher proportion of individuals with tied coverage than wave 1. We may also be understating the share with tied coverage because of changes in the wording of the HRS questionnaire; see Supplemental Material Appendix H for details.
TABLE VIII
VALUE OF EMPLOYER-PROVIDED HEALTH INSURANCE^a

                        Compensating Assets             Compensating Annuity
                      With           Without          With           Without
Asset Levels          Uncertainty    Uncertainty      Uncertainty    Uncertainty

Baseline Case
  −$5,700             $20,400        $10,700          $4,630         $2,530
  $51,600             $19,200        $10,900          $4,110         $2,700
  $147,200            $21,400        $10,600          $4,180         $2,540
  $600,000            $16,700        $11,900          $2,970         $2,360
No-Saving Cases^b
  (a) −$6,000         $112,000       $8,960           $11,220        $2,160
  (b) −$6,000         $21,860        $6,860           $3,880         $2,170

a. Compensating variation between retiree and none coverage for agents with type-1 preferences. Calculations described in the text.
b. No-Saving case (a) uses benchmark preference parameter values; case (b) uses parameter values estimated for the no-saving specification.
Table VIII shows the compensating variation λ(At, 0, good, $32,000, 0, 0, 60) at several different asset (At) levels.23 The first data column of Table VIII shows the valuations found under the baseline specification. One of the most striking features is that the value of employer-provided health insurance is fairly constant through much of the wealth distribution. Even though richer individuals can better self-insure, they also receive less protection from the government-provided consumption floor. These effects more or less cancel each other out over the asset range of −$5,700 to $147,000. However, individuals with asset levels of $600,000 place less value on retiree coverage, because they can better self-insure against medical expense shocks.

Part of the value of retiree coverage comes from a reduction in average medical expenses (because retiree coverage is subsidized) and part comes from a reduction in the volatility of medical expenses (because it is insurance). To separate the former from the latter, we eliminate medical expense uncertainty by setting the variance shifter σ(Ht, It, t, Bt, Pt) to zero, and recompute λt using the same state variables and mean medical expenses as before. Without medical expense uncertainty, λt is approximately $11,000. Comparing the two values of λt shows that for the typical worker (with $147,000 of assets), about half of the value of health insurance comes from the reduction of average medical expenses and half comes from the reduction of medical expense volatility.

23. In making these calculations, we remove health-insurance-specific differences in pensions, as described in Section 6.4. It is also worth noting that for the values of Ht and ζt−1 considered here, the conditional differences in expected medical expenses are smaller than the unconditional differences shown in Table I.
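Because the value function is increasing in assets, λt can be found by simple root-finding on the solved value function. The sketch below is a minimal illustration under that monotonicity assumption; the value function V, its exact state vector, and the search bracket are stand-ins, since the paper's computation is described in the text and appendices rather than shown here.

```python
from typing import Callable

def compensating_variation(V: Callable[..., float], state: dict,
                           lo: float = -50_000.0, hi: float = 500_000.0,
                           tol: float = 1.0) -> float:
    """Bisect for the asset increment lam that solves
        V(A, ..., retiree) = V(A + lam, ..., none),
    assuming V is increasing in assets and [lo, hi] brackets lam."""
    target = V(**state, insurance="retiree")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        uninsured = V(**{**state, "A": state["A"] + mid}, insurance="none")
        if uninsured < target:
            lo = mid        # need more assets to compensate for losing coverage
        else:
            hi = mid
    return 0.5 * (lo + hi)
```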
The first two data columns of Table VIII measure the lifetime value of health insurance as an asset increment that can be consumed immediately. An alternative approach is to express the value of health insurance as an illiquid annuity comparable to Social Security benefits. The last two columns show this "compensating annuity."24 When the value of health insurance is expressed as an annuity, the fraction of its value attributable to reduced medical expense volatility falls from about one-half to about 40%. In most other respects, however, the asset and annuity valuations of health insurance have similar implications.

To summarize, allowing for medical expense uncertainty greatly increases the value of health insurance. It is, therefore, unsurprising that we find larger effects of health insurance on retirement than do Gustman and Steinmeier (1994) and Lumsdaine, Stock, and Wise (1994), who assumed that workers value health insurance at its actuarial cost.

24. To do this, we first find the compensating AIME, λ̃t, where

Vt(At, Bt, Ht, AIMEt, ωt, ζt−1, retiree) = Vt(At, Bt, Ht, AIMEt + λ̃t, ωt, ζt−1, none).

This change in AIME in turn allows us to calculate the change in expected pension and Social Security benefits that the individual would receive at age 65, the sum of which can be viewed as a compensating annuity. Because these benefits depend on decisions made after age 60, the calculation is only approximate.

8. ALTERNATIVE SPECIFICATIONS

To consider whether our findings are sensitive to our modelling assumptions, we reestimate the model under three alternative specifications.25 Table IX shows model-predicted participation rates under the different specifications, along with the data. The parameter estimates behind these simulations are shown in Supplemental Material Appendix K.

The first data column of Table IX presents our baseline case. The second column presents the case where individuals are not allowed to save, the third column presents the case with no preference heterogeneity, and the fourth column presents the case where we remove the subjective preference index from the type prediction equations and the GMM criterion function. The last column presents the data. In general, the different specifications match the data profile equally well. Table X shows how total years of work over ages 60–69 are affected by changes in Social Security and Medicare under each of the alternative specifications. In all specifications, decreasing the Social Security benefits and raising the Medicare eligibility age increase years of work by similar amounts.

25. In earlier drafts of this paper (French and Jones (2004b, 2007)), we also estimated a specification in which housing wealth is illiquid. Although the parameter estimates and model fit for this case were somewhat different from our baseline results, the policy simulations were similar.
TABLE IX
MODEL PREDICTED PARTICIPATION BY AGE: ALTERNATIVE SPECIFICATIONS

                               Homogeneous    No Preference
Age     Baseline   No Saving   Preferences    Index           Data
60      0.650      0.648       0.621          0.653           0.657
61      0.622      0.632       0.595          0.625           0.636
62      0.513      0.513       0.517          0.516           0.530
63      0.456      0.457       0.453          0.459           0.467
64      0.413      0.429       0.409          0.417           0.407
65      0.378      0.380       0.365          0.381           0.358
66      0.350      0.334       0.351          0.357           0.326
67      0.339      0.327       0.345          0.346           0.314
68      0.307      0.308       0.319          0.314           0.304
69      0.264      0.282       0.286          0.273           0.283

Total 60–69   4.292   4.309    4.260          4.340           4.283
8.1. No Saving

We have argued that the ability to self-insure through saving significantly affects the value of employer-provided health insurance. One test of this hypothesis is to modify the model so that individuals cannot save, and to examine how labor market decisions change. In particular, we require workers to consume their income net of medical expenses, as in Rust and Phelan (1997) and Blau and Gilleskie (2006, 2008).

The second data column of Table IX contains the labor supply profile generated by the no-saving specification. Comparing this profile to the baseline case shows that, in addition to its obvious failings with respect to asset holdings, the no-saving case matches the labor supply data no better than the baseline case.26

26. Because the baseline and no-saving cases are estimated with different moments, their overidentification statistics are not comparable. However, inserting the decision profiles generated by the baseline model into the moment conditions used to estimate the no-saving case produces an overidentification statistic of 354, while the no-saving specification produces an overidentification statistic of 366.
TABLE X
EFFECTS OF CHANGING THE SOCIAL SECURITY RETIREMENT AND MEDICARE ELIGIBILITY AGES, AGES 60–69: ALTERNATIVE SPECIFICATIONS^a

                                                  Homogeneous   No Preference
Rule Specification             Baseline  No Saving  Preferences   Index
Baseline: SS = 65, MC = 65     4.292     4.309      4.260         4.340
SS = 67: Lower benefits^b      4.368     4.399      4.335         4.411
SS = 65, MC = 67               4.366     4.384      4.322         4.417
SS = 67^b, MC = 67             4.438     4.456      4.395         4.482

a. SS = Social Security normal retirement age; MC = Medicare eligibility age.
b. Benefits reduced by 2 years, as described in the text.
Table VIII displays two sets of compensating values for the no-saving case. Case (a), which uses the parameter values from the benchmark case, shows that eliminating the ability to save greatly increases the value of retiree coverage: when assets are −$6,000, the compensating annuity increases from $4,600 in the baseline case (with savings) to $11,200 in the no-saving case (a). When there is no medical expense uncertainty, the comparable figures are $2,530 in the baseline case and $2,160 in the no-saving case. Thus, the ability to self-insure through saving significantly reduces the value of employer-provided health insurance. Case (b) shows that using the parameter values estimated for the no-saving specification, which include a lower value of the risk parameter ν, also lowers the value of insurance. Simulating the responses to policy changes, we find that raising the Medicare eligibility age to 67 leads to an additional 0.075 years of work, an amount almost identical to that of the baseline specification.

8.2. No Preference Heterogeneity

To assess the importance of preference heterogeneity, we estimate and simulate a model where individuals have identical preferences (conditional on age and health status). Comparing the first, third, and last data columns of Table IX shows that the model without preference heterogeneity matches aggregate participation rates as well as the baseline model does. However, the no-preference-heterogeneity specification does much less well in replicating the way in which participation varies across the asset distribution and, not surprisingly, does not replicate the way in which participation varies across our discretized preference index.

When preferences are homogeneous, the simulated response to delaying the Medicare eligibility age, 0.062 years, is similar to the response in the baseline specification. This is consistent with our analysis in Section 6.4, where not accounting for preference heterogeneity and insurance self-selection appeared to only modestly change the estimated effects of health insurance on retirement.

8.3. No Preference Index

In the baseline specification, we use the preference index (described in Section 4.4) to predict preference type, and the GMM criterion function includes participation rates for each value of the index. Because labor force participation differs sharply across the index in ways not predicted by the model's other
state variables, we interpret the index as a measure of otherwise unobserved preferences toward work. It is possible, however, that using the preference index causes us to overstate the correlation between health insurance and tastes for leisure. For example, Table III shows that employed individuals with retiree coverage are more likely to have a preference index of low than employed individuals with tied coverage. This means that workers with retiree coverage are more likely to report looking forward to retirement, and thus more likely to be assigned a higher desire for leisure. But workers with retiree coverage may be more likely to report looking forward to retirement simply because they would have health insurance and other financial resources during retirement.

As a robustness test, we remove the preference index and the preference index-related moment conditions, and reestimate the model. Table XI contains summary statistics for the preference groups generated by this alternative specification. Comparing Table XI to the baseline results contained in Table V reveals that eliminating the preference index from the type prediction equations changes the parameter estimates and the distribution of insurance coverage across the three preference types only modestly. The model without the preference index provides less evidence of self-selection: when the preference index is removed, the fraction of high-preference-for-work type-2 individuals who have tied coverage falls from 15.8% to 8.9%.

Table X shows that excluding the preference index only slightly changes the estimated effect of Medicare and Social Security on labor supply. Given that self-selection has only a small effect on our results when we include the preference index, it should come as no surprise that self-selection also has only a small effect when we exclude the index.
TABLE XI
MEAN VALUES BY PREFERENCE TYPE: ALTERNATIVE SPECIFICATION

                                                  Type 0   Type 1   Type 2
Key Preference Parameters
  γ                                               0.405    0.647    0.986
  β                                               0.962    0.858    1.143
Means by Preference Type
  Assets ($1,000s)                                115      231      376
  Pension wealth ($1,000s)                        60       108      85
  Wages ($/hour)                                  11.0     18.4     13.5
Probability of Health Insurance Type, Given Preference Type
  Health insurance = none                         0.392    0.193    0.394
  Health insurance = retiree                      0.560    0.633    0.518
  Health insurance = tied                         0.047    0.174    0.089
Probability of Preference Index Value, Given Preference Type
  Preference index = out                          0.523    0.216    0.224
  Preference index = low                          0.247    0.399    0.363
  Preference index = high                         0.230    0.385    0.413
Fraction with preference type                     0.246    0.635    0.119
9. CONCLUSION

Prior to age 65, many individuals receive health insurance only if they continue to work. At age 65, however, Medicare provides health insurance to almost everyone. Therefore, a potentially important work incentive disappears at age 65. To see whether Medicare benefits have a large effect on retirement behavior, we construct a retirement model that includes health insurance, uncertain medical costs, a savings decision, a nonnegativity constraint on assets, and a government-provided consumption floor. Using data from the Health and Retirement Study, we estimate the structural parameters of our model.

The model fits the data well, with reasonable preference parameters. In addition, the model does a satisfactory job of predicting the behavior of individuals who, by belonging to a younger cohort, face different Social Security rules than the individuals on which the model was estimated.

We find that health care uncertainty significantly affects the value of employer-provided health insurance. Our calculations suggest that about half of the value workers place on employer-provided health insurance comes from its ability to reduce medical expense risk. Furthermore, we find evidence that individuals with higher tastes for leisure are more likely to choose employers who provide health insurance to early retirees. Nevertheless, we find that Medicare is important for understanding retirement, especially for workers whose health insurance is tied to their job. For example, the effects of raising the Medicare eligibility age to 67 are just as large as the effects of reducing Social Security benefits.

APPENDIX A: CAST OF CHARACTERS

Preference Parameters
  γ      Consumption weight
  β      Time discount factor
  ν      Coefficient of relative risk aversion, utility
  θB     Bequest weight
  κ      Bequest shifter
  Cmin   Consumption floor
  L      Leisure endowment
  φH     Leisure cost of bad health
  φPt    Fixed cost of work
  φP0    Fixed cost, intercept
  φP1    Fixed cost, time trend
  φRE    Reentry cost

Health-Related Parameters
  Ht     Health status
  Mt     Out-of-pocket medical expenses
  It     Health insurance type
  m(·)   Mean shifter, logged medical expenses
  σ(·)   Volatility shifter, logged medical expenses
  ψt     Idiosyncratic medical expense shock
  ζt     Persistent medical expense shock
  εt     Innovation, persistent shock
  ρm     Autocorrelation, persistent shock
  σε²    Innovation variance, persistent shock
  ξt     Transitory medical expense shock
  σξ²    Variance, transitory shock

Decision Variables
  Ct     Consumption
  Nt     Hours of work
  Lt     Leisure
  Pt     Participation
  At     Assets
  Bt     Social Security application

Financial Variables
  Y(·)   After-tax income
  τ      Tax parameter vector
  r      Real interest rate
  yst    Spousal income
  ys(·)  Mean shifter, spousal income
  sst    Social Security income
  AIMEt  Social Security wealth
  pbt    Pension benefits

Wage-Related Parameters
  Wt     Hourly wage
  W(·)   Mean shifter, logged wages
  α      Coefficient on hours, logged wages
  ωt     Idiosyncratic wage shock
  ρW     Autocorrelation, wage shock
  ηt     Innovation, wage shock
  ση²    Innovation variance, wage shock

Miscellaneous
  st     Survival probability
  pref   Discrete preference index
  Xt     State vector, worker's problem
  λ(·)   Compensating variation
  T      Number of years in GMM criterion
REFERENCES

AARONSON, D., AND E. FRENCH (2004): "The Effect of Part-Time Work on Wages: Evidence From the Social Security Rules," Journal of Labor Economics, 22, 329–352. [708]
BERKOVEC, J., AND S. STERN (1991): "Job Exit Behavior of Older Men," Econometrica, 59, 189–210. [702]
BLAU, D., AND D. GILLESKIE (2001): "Retiree Health Insurance and the Labor Force Behavior of Older Men in the 1990's," Review of Economics and Statistics, 83, 64–80. [720]
BLAU, D., AND D. GILLESKIE (2006): "Health Insurance and Retirement of Married Couples," Journal of Applied Econometrics, 21, 935–953. [694,696,698,713,720,723,726]
BLAU, D., AND D. GILLESKIE (2008): "The Role of Retiree Health Insurance in the Employment Behavior of Older Men," International Economic Review, 49, 475–514. [694,696,713,720,726]
BOARDS OF TRUSTEES OF THE FEDERAL HOSPITAL INSURANCE AND FEDERAL SUPPLEMENTARY MEDICAL INSURANCE TRUST FUNDS (2010): 2010 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds. Washington, DC: Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds. Available at https://www.cms.gov/ReportsTrustFunds/downloads/tr2010.pdf. [693]
CAGETTI, M. (2003): "Wealth Accumulation Over the Life Cycle and Precautionary Savings," Journal of Business & Economic Statistics, 21, 339–353. [713]
CASANOVA, M. (2010): "Happy Together: A Structural Model of Couples' Joint Retirement Decisions," Working Paper, UCLA. [694]
COGAN, J. (1981): "Fixed Costs and Labor Supply," Econometrica, 49, 945–963. [696]
COMMITTEE ON WAYS AND MEANS, U.S. HOUSE OF REPRESENTATIVES (2000): 2000 Green Book. Washington: U.S. Government Printing Office. [700,713]
DE NARDI, M. (2004): "Wealth Inequality and Intergenerational Links," Review of Economic Studies, 71, 743–768. [696,714]
DE NARDI, M., E. FRENCH, AND J. JONES (2010): "Why Do the Elderly Save? The Role of Medical Expenses," Journal of Political Economy, 118, 39–75. [698]
ELDER, T., AND E. POWERS (2006): "The Incredible Shrinking Program: Trends in SSI Participation of the Aged," Research on Aging, 28, 341–358. [713]
EMPLOYEE BENEFIT RESEARCH INSTITUTE (1999): EBRI Health Benefits Databook. Washington: EBRI-ERF. [706]
EROSA, A., L. FUSTER, AND G. KAMBOUROV (2010): "Towards a Micro-Founded Theory of Aggregate Labor Supply," Working Paper, IMDEA Social Sciences Institute and University of Toronto. Available at http://homes.chass.utoronto.ca/~gkambour/research/labor_supply/EFK_labor_supply.pdf. [699]
FRENCH, E. (2005): "The Effects of Health, Wealth and Wages on Labor Supply and Retirement Behavior," Review of Economic Studies, 72, 395–427. [699,702,704,709,713]
FRENCH, E., AND J. JONES (2004a): "On the Distribution and Dynamics of Health Care Costs," Journal of Applied Econometrics, 19, 705–721. [698,705,706,713]
FRENCH, E., AND J. JONES (2004b): "The Effects of Health Insurance and Self-Insurance on Retirement Behavior," Working Paper 2004-12, Center for Retirement Research. [703,725]
FRENCH, E., AND J. JONES (2007): "The Effects of Health Insurance and Self-Insurance on Retirement Behavior," Working Paper 2007-170, Michigan Retirement Research Center. [725]
FRENCH, E., AND J. JONES (2011): "Supplement to 'The Effects of Health Insurance and Self-Insurance on Retirement Behavior'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/7560_extensions.pdf; http://www.econometricsociety.org/ecta/Supmat/7560_data and programs-1.zip; http://www.econometricsociety.org/ecta/Supmat/7560_data and programs-2.zip. [695]
GOURINCHAS, P., AND J. PARKER (2002): "Consumption Over the Life Cycle," Econometrica, 70, 47–89. [702]
GRUBER, J., AND B. MADRIAN (1995): "Health Insurance Availability and the Retirement Decision," American Economic Review, 85, 938–948. [699]
GRUBER, J., AND B. MADRIAN (1996): "Health Insurance and Early Retirement: Evidence From the Availability of Continuation Coverage," in Advances in the Economics of Aging, ed. by D. A. Wise. Chicago: University of Chicago Press, 115–143. [699]
GUSTMAN, A., AND T. STEINMEIER (1994): "Employer-Provided Health Insurance and Retirement Behavior," Industrial and Labor Relations Review, 48, 124–140. [694,706,720,725]
GUSTMAN, A., AND T. STEINMEIER (2005): "The Social Security Early Entitlement Age in a Structural Model of Retirement and Wealth," Journal of Public Economics, 89, 441–463. [696,697]
HECKMAN, J., AND B. SINGER (1984): "A Method for Minimizing the Impact of Distributional Assumptions in Econometric Models for Duration Data," Econometrica, 52, 271–320. [703]
HUBBARD, R., J. SKINNER, AND S. ZELDES (1994): "The Importance of Precautionary Motives in Explaining Individual and Aggregate Saving," Carnegie–Rochester Conference Series on Public Policy, 40, 59–125. [697,713]
HUBBARD, R., J. SKINNER, AND S. ZELDES (1995): "Precautionary Saving and Social Insurance," Journal of Political Economy, 103, 360–399. [697,713]
KAHN, J. (1988): "Social Security, Liquidity, and Early Retirement," Journal of Public Economics, 35, 97–117. [697]
KAISER/HRET (2006): The 2006 Kaiser/HRET Employer Health Benefit Survey. Menlo Park, CA: Henry J. Kaiser Family Foundation and Chicago, IL: Health Research and Educational Trust. Available at http://www.kff.org/insurance/7527/upload/7527.pdf. [723]
KAROLY, L., AND J. ROGOWSKI (1994): "The Effect of Access to Post-Retirement Health Insurance on the Decision to Retire Early," Industrial and Labor Relations Review, 48, 103–123. [720]
KEANE, M., AND K. WOLPIN (1997): "The Career Decisions of Young Men," Journal of Political Economy, 105, 473–522. [703]
KEANE, M., AND K. WOLPIN (2007): "Exploring the Usefulness of a Non-Random Holdout Sample for Model Validation: Welfare Effects on Female Behavior," International Economic Review, 48, 1351–1378. [695,720]
LAIBSON, D., A. REPETTO, AND J. TOBACMAN (2007): "Estimating Discount Functions With Consumption Choices Over the Lifecycle," Working Paper, Harvard University. [702]
LOW, H., AND L. PISTAFERRI (2010): "Disability Risk, Disability Insurance and Life Cycle Behavior," Working Paper 15962, NBER. [713]
LUMSDAINE, R., J. STOCK, AND D. WISE (1994): "Pension Plan Provisions and Retirement: Men, Women, Medicare and Models," in Studies in the Economics of Aging, ed. by D. Wise. Chicago: University of Chicago Press. [694,720,725]
MADRIAN, B., G. BURTLESS, AND J. GRUBER (1994): "The Effect of Health Insurance on Retirement," Brookings Papers on Economic Activity, 1994, 181–252. [720]
ROGERSON, R., AND J. WALLENIUS (2009): "Retirement in a Life Cycle Model of Labor Supply With Home Production," Working Paper 2009-205, Michigan Retirement Research Center. [714]
RUST, J., AND C. PHELAN (1997): "How Social Security and Medicare Affect Retirement Behavior in a World of Incomplete Markets," Econometrica, 65, 781–831. [694,696,697,706,709,713,720,723,726]
RUST, J., M. BUCHINSKY, AND H. BENITEZ-SILVA (2003): "Dynamic Structural Models of Retirement and Disability," Working Paper, University of Maryland, UCLA, and SUNY–Stony Brook. Available at http://ms.cc.sunysb.edu/~hbenitezsilv/newr02.pdf. [696]
SONG, J., AND J. MANCHESTER (2007): "New Evidence on Earnings and Benefit Claims Following Changes in the Retirement Earnings Test in 2000," Journal of Public Economics, 91, 669–700. [721]
VAN DER KLAAUW, W., AND K. WOLPIN (2008): "Social Security and the Retirement and Savings Behavior of Low-Income Households," Journal of Econometrics, 145, 21–42. [694,696,703]
Federal Reserve Bank of Chicago, 230 South LaSalle Street, Chicago, IL 60604, U.S.A.;
[email protected] and Dept. of Economics, University at Albany, SUNY, BA-110, Albany, NY 12222, U.S.A.;
[email protected]. Manuscript received November, 2007; final revision received January, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 733–772
THE GRANULAR ORIGINS OF AGGREGATE FLUCTUATIONS BY XAVIER GABAIX1 This paper proposes that idiosyncratic firm-level shocks can explain an important part of aggregate movements and provide a microfoundation for aggregate shocks. Existing research has focused on using aggregate shocks to explain business cycles, arguing that individual firm shocks average out in the aggregate. I show that this argument breaks down if the distribution of firm sizes is fat-tailed, as documented empirically. The idiosyncratic movements of the largest 100 firms in the United States appear to explain about one-third of variations in output growth. This “granular” hypothesis suggests new directions for macroeconomic research, in particular that macroeconomic questions can be clarified by looking at the behavior of large firms. This paper’s ideas and analytical results may also be useful for thinking about the fluctuations of other economic aggregates, such as exports or the trade balance. KEYWORDS: Business cycle, idiosyncratic shocks, productivity, Solow residual, granular residual.
1. INTRODUCTION

THIS PAPER PROPOSES a simple origin of aggregate shocks. It develops the view that a large part of aggregate fluctuations arises from idiosyncratic shocks to individual firms. This approach sheds light on a number of issues that are difficult to address in models that postulate aggregate shocks. Although economy-wide shocks (inflation, wars, policy shocks) are no doubt important, they have difficulty explaining most fluctuations (Cochrane (1994)). Often, the explanation for year-to-year jumps of aggregate quantities is elusive. On the other hand, there is a large amount of anecdotal evidence of the importance of idiosyncratic shocks. For instance, the Organization for Economic Cooperation and Development (OECD (2004)) found that, in 2000, Nokia contributed 1.6 percentage points of Finland's gross domestic product (GDP) growth.2 Likewise, shocks to GDP may stem from a variety of events, such as successful

1 For excellent research assistance, I thank Francesco Franco, Jinsook Kim, Farzad Saidi, Heiwai Tang, Ding Wu, and, particularly, Alex Chinco and Fernando Duarte. For helpful comments, I thank the co-editor, four referees, and seminar participants at Berkeley, Boston University, Brown, Columbia, ECARES, the Federal Reserve Bank of Minneapolis, Harvard, Michigan, MIT, New York University, NBER, Princeton, Toulouse, U.C. Santa Barbara, Yale, the Econometric Society, the Stanford Institute for Theoretical Economics, and Kenneth Arrow, Robert Barsky, Susanto Basu, Roland Bénabou, Olivier Blanchard, Ricardo Caballero, David Canning, Andrew Caplin, Thomas Chaney, V. V. Chari, Larry Christiano, Diego Comin, Don Davis, Bill Dupor, Steve Durlauf, Alex Edmans, Martin Eichenbaum, Eduardo Engel, John Fernald, Jesus Fernandez-Villaverde, Richard Frankel, Mark Gertler, Robert Hall, John Haltiwanger, Chad Jones, Boyan Jovanovic, Finn Kydland, David Laibson, Arnaud Manas, Ellen McGrattan, Todd Mitton, Thomas Philippon, Robert Solow, Peter Temin, Jose Tessada, and David Weinstein. I thank the NSF (Grant DMS-0938185) for support.
2 The example of Nokia is extreme but may be useful. In 2003, worldwide sales of Nokia were $37 billion, representing 26% of Finland's GDP of $142 billion. This is not sufficient for a proper assessment of Nokia's importance, but gives some order of magnitude, as the Finnish base of Nokia is an important residual claimant of the fluctuations of Nokia International.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8769
FIGURE 1.—Sum of the sales of the top 50 and 100 non-oil firms in Compustat, as a fraction of GDP. Hulten’s theorem (Appendix B) motivates the use of sales rather than value added.
innovations by Walmart, the difficulties of a Japanese bank, new exports by Boeing, and a strike at General Motors.3 Since modern economies are dominated by large firms, idiosyncratic shocks to these firms can lead to nontrivial aggregate shocks. For instance, in Korea, the top two firms (Samsung and Hyundai) together account for 35% of exports, and the sales of those two firms account for 22% of Korean GDP (di Giovanni and Levchenko (2009)). In Japan, the top 10 firms account for 35% of exports (Canals, Gabaix, Vilarrubia, and Weinstein (2007)). For the United States, Figure 1 reports the total sales of the top 50 and 100 firms as a fraction of GDP. On average, the sales of the top 50 firms are 24% of GDP, while the sales of the top 100 firms are 29% of GDP. The top 100 firms hence represent a large part of the macroeconomic activity, so understanding their actions offers good insight into the aggregate economy. In this view, many economic fluctuations are not, primitively, due to small diffuse shocks that directly affect every firm. Instead, many economic fluctuations are attributable to the incompressible "grains" of economic activity, the

3 Other aggregates are affected as well. For instance, in December 2004, a $24 billion one-time Microsoft dividend boosted growth in personal income from 0.6% to 3.7% (Bureau of Economic Analysis, January 31, 2005). A macroeconomist would find it difficult to explain this jump in personal income without examining individual firm behavior.
large firms. I call this view the "granular" hypothesis. In the granular view, idiosyncratic shocks to large firms have the potential to generate nontrivial aggregate shocks that affect GDP and, via general equilibrium, all firms. The granular hypothesis offers a microfoundation for the aggregate shocks of real business cycle models (Kydland and Prescott (1982)). Hence, real business cycle shocks are not, at heart, mysterious "aggregate productivity shocks" or "a measure of our ignorance" (Abramovitz (1956)). Instead, they are well defined shocks to individual firms. The granular hypothesis sheds light on a number of other issues, such as the dependence of the amplitude of GDP fluctuations on GDP level, the microeconomic composition of GDP, and the distribution of GDP and firm-level fluctuations.

In most of this paper, the standard deviation of the percentage growth rate of a firm is assumed to be independent of its size.4 This explains why individual firms can matter in the aggregate. If Walmart doubles its number of supermarkets and thus its size, its variance is not divided by 2, as would be the case if Walmart were the amalgamation of many independent supermarkets. Instead, the newly acquired supermarkets inherit the Walmart shocks, and the total percentage variance of Walmart does not change. This paper conceptualizes these shocks as productivity growth, but the analysis holds for other shocks.5

The main argument is summarized as follows. First, it is critical to show that 1/√N diversification does not occur in an economy with a fat-tailed distribution of firms. A simple diversification argument shows that, in an economy with N firms with independent shocks, aggregate fluctuations should have a size proportional to 1/√N. Given that modern economies can have millions of firms, this suggests that idiosyncratic fluctuations will have a negligible aggregate effect. This paper points out that when firm size is power-law distributed, the conditions under which one derives the central limit theorem break down and other mathematics apply (see Appendix A). In the central case of Zipf's law, aggregate volatility decays according to 1/ln N, rather than 1/√N. The strong 1/√N diversification is replaced by a much milder one that decays according to 1/ln N. In an economy with a fat-tailed distribution of firms, diversification effects due to country size are quite small.

Having established that idiosyncratic shocks do not die out in the aggregate, I show that they are of the correct order of magnitude to explain business cycles. We will see that if firm i has a productivity shock dπi, these shocks

4 The benchmark that the variance of the percentage growth rate is approximately independent of size ("Gibrat's law" for variances) appears to hold to a good first degree; see Section 2.5.
5 The productivity shocks can come from a decision of the firm's research department, of the firm's chief executive officer, of how to process shipments, inventories, or which new line of products to try. They can also stem from changes in capacity utilization, and, particularly, strikes. Suppose a firm, which uses only capital and labor, is on strike for half the year. For many purposes, its effective productivity that year is halved. This paper does not require the productivity shocks to arise from any particular source.
are independent and identically distributed (i.i.d.) and there is no amplification mechanism, then the standard deviation of total factor productivity (TFP) growth is σTFP = σπ h, where σπ is the standard deviation of the i.i.d. productivity shocks and h is the sales herfindahl of the economy. Using the estimate of annual productivity volatility of σπ = 12% and the sales herfindahl of h = 5.3% for the United States in 2008, one predicts a TFP volatility equal to σTFP = 12% · 5.3% = 0.63%. Standard amplification mechanisms generate the order of magnitude of business cycle fluctuations, σGDP = 1.7%. Non-U.S. data lead to even larger business cycle fluctuations. I conclude that idiosyncratic granular volatility seems quantitatively large enough to matter at the macroeconomic level.

Section 3 then investigates accordingly the proportion of aggregate shocks that can be accounted for by idiosyncratic fluctuations. I construct the "granular residual" Γt, which is a parsimonious measure of the shocks to the top 100 firms:

\Gamma_t := \sum_{i=1}^{K} \frac{\text{sales}_{i,t-1}}{GDP_{t-1}} (g_{it} - \bar{g}_t),
where g_it − ḡ_t is a simple measure of the idiosyncratic shock to firm i. Regressing the growth rate of GDP on the granular residual yields an R2 of roughly one-third. Prima facie, this means that idiosyncratic shocks to the top 100 firms in the United States can explain one-third of the fluctuations of GDP. More sophisticated controls for common shocks confirm this finding. In addition, the granular residual turns out to be a useful novel predictor of GDP growth which complements existing predictors. This supports the view that thinking about firm-level shocks can improve our understanding of GDP movements.

Previous economists have proposed mechanisms that generate macroeconomic shocks from purely microeconomic causes. A pioneering paper is by Jovanovic (1987), whose models generate nonvanishing aggregate fluctuations owing to a multiplier proportional to √N, the square root of the number of firms. However, Jovanovic's theoretical multiplier of √N ≈ 1000 is much larger than is empirically plausible.6 Nonetheless, Jovanovic's model spawned a lively intellectual quest. Durlauf (1993) generated macroeconomic uncertainty with idiosyncratic shocks and local interactions between firms. The drivers of his results are the nonlinear interactions between firms, while in this paper it is the skewed distribution of firms. Bak, Chen, Scheinkman, and Woodford (1993) applied the physical theory of self-organizing criticality. While there is much to learn from their approach, it generates fluctuations more fat-tailed than in reality, with infinite means. Nirei (2006) proposed a model where aggregate fluctuations arise from (s, S) rules at the firm level, in the spirit of Bak

6 If the actual multiplier were so large, the impact of trade shocks, for instance, would be much higher than we observe.
et al. (1993). These models are conceptually innovative, but they are hard to work with theoretically and empirically. The mechanism proposed in this paper is tractable and relies on readily observable quantities.

Long and Plosser (1983) suggested that sectoral (rather than firm) shocks might account for GDP fluctuations. As their model has a small number of sectors, those shocks can be viewed as miniaggregate shocks. Horvath (2000), as well as Conley and Dupor (2003), explored this hypothesis further. They found that sector-specific shocks are an important source of aggregate volatility. Finally, Horvath (1998) and Dupor (1999) debated whether N sectors can have a volatility that does not decay according to 1/√N. I found an alternative solution to their debate, which is formalized in Proposition 2. My approach relies on those earlier contributions and clarifies that the fat-tailed nature of the sectoral shocks is important theoretically, as it determines whether the central limit theorem applies. Studies disagree somewhat on the relative importance of sector-specific shocks, aggregate shocks, and complementarities. Caballero, Engel, and Haltiwanger (1997) found that aggregate shocks are important, while Horvath (1998) concluded that sector-specific shocks go a long way toward explaining aggregate disturbances. Many of the effects in this paper could be expressed in terms of sectors.

Granular effects are likely to be even stronger outside the United States, as the United States is more diversified than most other countries. One number reported in the literature is the value of the assets controlled by the richest 10 families, divided by GDP. Claessens, Djankov, and Lang (2000) found a number equal to 38% in Asia, including 84% of GDP in Hong Kong, 76% in Malaysia, and 39% in Thailand. Faccio and Lang (2002) also found that the top 10 families control 21% of listed assets in their sample of European firms. It would be interesting to transpose the present analysis to those countries and to entities other than firms, for instance, business groups or sectors.

This paper is organized as follows. Section 2 develops a simple model. It also provides a calibration that indicates that the effects are of the right order of magnitude to account for macroeconomic fluctuations. Section 3 shows directly that the idiosyncratic movements of firms appear to explain, year by year, about one-third of actual fluctuations in GDP, and also contains a narrative of the granular residual and GDP. Section 4 concludes.

2. THE CORE IDEA

2.1. A Simple "Islands" Economy

This section uses a concise model to illustrate the idea. I consider an islands economy with N firms. Production is exogenous, like in an endowment
economy, and there are no linkages between firms (those will be added later). Firm i produces a quantity S_it of the consumption good. It experiences a growth rate

(1)   \frac{\Delta S_{i,t+1}}{S_{it}} = \frac{S_{i,t+1} - S_{it}}{S_{it}} = \sigma_i \varepsilon_{i,t+1},

where σi is firm i's volatility and εit+1 are uncorrelated random variables with mean 0 and variance 1. Firm i produces a homogeneous good without any factor input. Total GDP is

(2)   Y_t = \sum_{i=1}^{N} S_{it},

and GDP growth is

\frac{\Delta Y_{t+1}}{Y_t} = \frac{1}{Y_t} \sum_{i=1}^{N} \Delta S_{i,t+1} = \sum_{i=1}^{N} \frac{S_{it}}{Y_t} \sigma_i \varepsilon_{i,t+1}.

As the shocks εit+1 are uncorrelated, the standard deviation of GDP growth is σGDP = (var(ΔY_{t+1}/Y_t))^{1/2}:

(3)   \sigma_{GDP} = \Big( \sum_{i=1}^{N} \sigma_i^2 \Big( \frac{S_{it}}{Y_t} \Big)^2 \Big)^{1/2}.

Hence, the variance of GDP growth, σ²GDP, is the weighted sum of the variances σi² of the idiosyncratic shocks, with weights equal to (S_it/Y_t)², the squared share of output that firm i accounts for. If the firms all have the same volatility σi = σ, we obtain

(4)   \sigma_{GDP} = \sigma h,

where h is the square root of the sales herfindahl of the economy:

(5)   h = \Big( \sum_{i=1}^{N} \Big( \frac{S_{it}}{Y_t} \Big)^2 \Big)^{1/2}.

For simplicity, h will be referred to as the herfindahl of the economy. This paper works first with the basic model (1)–(2). The arguments apply if general equilibrium mechanisms are added.
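To make the mechanics of (1)–(5) concrete, here is a minimal simulation sketch (mine, not part of the paper; the size distribution, σ, and sample sizes are illustrative assumptions) checking that simulated GDP growth has standard deviation σh, as equation (4) predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative islands economy: N firms with heavy-tailed sizes (an assumption).
N, sigma = 500, 0.12
S = rng.pareto(1.1, size=N) + 1.0        # firm sizes S_i
Y = S.sum()                              # GDP, equation (2)

# Herfindahl h of equation (5): root of the sum of squared output shares.
h = np.sqrt(((S / Y) ** 2).sum())

# Simulate GDP growth: dY/Y = sum_i (S_i / Y) * sigma * eps_i, from (1)-(2).
T = 10_000
eps = rng.standard_normal((T, N))        # uncorrelated mean-0, variance-1 shocks
gdp_growth = eps @ (S / Y * sigma)

print(f"prediction sigma * h: {sigma * h:.5f}")
print(f"simulated std       : {gdp_growth.std():.5f}")  # close to sigma * h
```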
2.2. The 1/√N Argument for the Irrelevance of Idiosyncratic Shocks

Macroeconomists often appeal to aggregate (or at least sectorwide) shocks, since idiosyncratic fluctuations disappear in the aggregate if there is a large number of firms N. Consider firms of initially identical size equal to 1/N of GDP and identical standard deviation σi = σ. Then (4)–(5) give

\sigma_{GDP} = \frac{\sigma}{\sqrt{N}}.

To estimate the order of magnitude of the cumulative effect of idiosyncratic shocks, take an estimate of firm volatility σ = 12% from Section 2.4 and consider an economy with N = 10^6 firms.7 Then

\sigma_{GDP} = \frac{\sigma}{\sqrt{N}} = \frac{12\%}{10^3} = 0.012\% \text{ per year.}

Such a GDP volatility of 0.012% is much too small to account for the empirically measured size of macroeconomic fluctuations of around 1%. This is why economists typically appeal to aggregate shocks. More general modelling assumptions predict a 1/√N scaling, as shown by the next proposition.

PROPOSITION 1: Consider an islands economy with N firms whose sizes are drawn from a distribution with finite variance. Suppose that they all have the same volatility σ. Then the economy's GDP volatility follows, as N → ∞,

(6)   \sigma_{GDP} \sim \frac{E[S^2]^{1/2}}{E[S]} \frac{\sigma}{\sqrt{N}}.

PROOF: Since σGDP = σh, I examine h:

N^{1/2} h = \frac{(N^{-1} \sum_{i=1}^{N} S_i^2)^{1/2}}{N^{-1} \sum_{i=1}^{N} S_i}.

The law of large numbers ensures that N^{-1} \sum_{i=1}^{N} S_i^2 \to E[S^2] and N^{-1} \sum_{i=1}^{N} S_i \to E[S] almost surely. This yields N^{1/2} h \to E[S^2]^{1/2}/E[S] almost surely. Q.E.D.

Proposition 1 will be contrasted with Proposition 2 below, which shows that different models of the size distribution of firms lead to dramatically different results.

2.3. The Failure of the 1/√N Argument When the Firm Size Distribution Is Power Law

The firm size distribution, however, is not thin-tailed, as assumed in Proposition 1. Indeed, Axtell (2001), using Census data, found a power law with exponent ζ = 1.059 ± 0.054. Hence, the size distribution of U.S. firms is well
7 Axtell (2001) reported that in 1997 there were 5.5 million firms in the United States.
approximated by the power law with exponent ζ = 1, the "Zipf" distribution (Zipf (1949)). This finding holds internationally, and the origins of this distribution are becoming better understood (see Gabaix (2009)). The next proposition examines behavior under a "fat-tailed" distribution of firms.

PROPOSITION 2: Consider a series of island economies indexed by N ≥ 1. Economy N has N firms whose growth rate volatility is σ and whose sizes S_1, ..., S_N are drawn from a power law distribution

(7)   P(S > x) = a x^{-\zeta}

for x > a^{1/\zeta}, with exponent ζ ≥ 1. Then, as N → ∞, GDP volatility follows

(8)    \sigma_{GDP} \sim \frac{v_\zeta}{\ln N}\, \sigma \quad \text{for } \zeta = 1,

(9)    \sigma_{GDP} \sim \frac{v_\zeta}{N^{1-1/\zeta}}\, \sigma \quad \text{for } 1 < \zeta < 2,

(10)   \sigma_{GDP} \sim \frac{v_\zeta}{N^{1/2}}\, \sigma \quad \text{for } \zeta \geq 2,

where v_ζ is a random variable. The distribution of v_ζ does not depend on N and σ. When ζ ≤ 2, v_ζ is the square root of a stable Lévy distribution with exponent ζ/2. When ζ > 2, v_ζ is simply a constant. In other terms, when ζ = 1 (Zipf's law), GDP volatility decays like 1/ln N rather than 1/√N.

In the above proposition, an expression like \sigma_{GDP} \sim \frac{v_\zeta}{N^{1-1/\zeta}} \sigma means that \sigma_{GDP} N^{1-1/\zeta} converges to v_\zeta \sigma in distribution. More formally, for a series of random variables X_N and of positive numbers a_N, X_N \sim a_N Y means that X_N / a_N \to Y in distribution as N → ∞.

I comment on the economics of Proposition 2 before proving it. The firm size distribution has thin tails, that is, finite variance, if and only if ζ > 2. Proposition 1 states that if the firm size distribution has thin tails, then σGDP decays according to 1/√N. In contrast, Proposition 2 states that if the firm size distribution has fat tails (ζ < 2), then σGDP decays much more slowly than 1/√N: it decays as 1/N^{1−1/ζ}.

To get the intuition for the scaling, take the case a = 1 and observe that (7) implies that the "typical" size S_1 of the largest firm is such that S_1^{-\zeta} = 1/N, hence S_1 = N^{1/\zeta} (see Sornette (2006) for that type of intuition). In contrast, GDP is Y ≃ NE[S] when ζ > 1 by the law of large numbers. Hence, the share of the largest firm is S_1/Y = N^{-(1-1/\zeta)}/E[S] \propto N^{-(1-1/\zeta)}:8 this is a small decay when

8 Here f(Y) ∝ g(Y) for some functions f, g means that the ratio f(Y)/g(Y) tends, for large Y, to be a positive real number. So f and g have the same scaling "up to a constant factor."
ζ is close to 1. Likewise, the size of the top k firms satisfies S_k^{-\zeta} = k/N, so S_k = (N/k)^{1/\zeta}. Hence, the share of the largest K firms (for a fixed K) is proportional to N^{-(1-1/\zeta)}. Plugging this into (5), we see that the herfindahl, and GDP volatility, is proportional to N^{-(1-1/\zeta)}.

In the case ζ = 1, E[S] = ∞, so GDP cannot be Y ≃ NE[S]. The following heuristic reasoning gives the correct value. As the firm size density is x^{-2} and we saw that the largest firm has typical size N, the typical average firm size is \bar{S}_N = \int_1^N x \cdot x^{-2}\, dx = \ln N, and then Y ≃ N \bar{S}_N = N \ln N. Hence, the share of the top firm is S_1/Y = 1/\ln N. By the above reasoning, GDP volatility is proportional to 1/ln N.

The perspective of Proposition 2 is that of an economist who knows the GDP of various countries, but not the size of their respective firms, except that, for instance, they follow Zipf's law. Then he would conclude that the volatility of a country of size N should be proportional to 1/ln N. This explains the v_ζ terms in the distribution of σGDP: when ζ < 2, GDP volatility (and the herfindahl h) depends on the specific realization of the size distribution of top firms. Because of the fat-tailedness of the distribution of firms, σGDP does not have a degenerate distribution even as N → ∞. For the same reason, when ζ > 2, the law of large numbers applies and the distribution of volatility does become degenerate. Of course, if the economist knows the actual size of the firms, then she could calculate the standard deviation of GDP directly by calculating the herfindahl index. Note also that as GDP is made of some large firms, GDP fluctuations are typically not Gaussian (mathematically, the Lindeberg–Feller theorem does not apply, because there are some large firms). The ex ante distribution is developed further in Proposition 3. Having made these remarks about the meaning of Proposition 2, let me present its proof.

PROOF OF PROPOSITION 2: Since σGDP = σh, I examine

(11)   h = N^{-1/2} \frac{(N^{-1} \sum_{i=1}^{N} S_i^2)^{1/2}}{N^{-1} \sum_{i=1}^{N} S_i}.

I observe that when ζ > 1, the law of large numbers gives

(12)   N^{-1} \sum_{i=1}^{N} S_i \to E[S]
almost surely, so

h \sim N^{-1/2} \frac{(N^{-1} \sum_{i=1}^{N} S_i^2)^{1/2}}{E[S]}.
I will first complete the above heuristic proof for the scaling as a function of N, which will be useful to ground the intuition, and then present a formal proof which relies on the heavier machinery of Lévy's theorem.

Heuristic Proof. For simplicity, I normalize a = 1. I observe that the size of the ith largest firm is approximately

(13)   S_{i,N} = \Big( \frac{i}{N} \Big)^{-1/\zeta}.

The reason for (13) is the following. As the counter-cumulative distribution function (CDF) of the distribution is x^{-\zeta}, the random variable S^{-\zeta} follows a uniform distribution. Hence, the size of firm number i out of N follows E[S_{i,N}^{-\zeta}] = i/(N+1). So in a heuristic sense, we have S_{i,N}^{-\zeta} \simeq i/(N+1) or, more simply, (13). From representation (13), the herfindahl can be calculated as

h_N \sim \frac{N^{-1+1/\zeta} \big( \sum_{i=1}^{N} i^{-2/\zeta} \big)^{1/2}}{E[S]}.

In the fat-tailed case, ζ < 2, the series \sum_{i=1}^{\infty} i^{-2/\zeta} converges, hence

h_N \sim \frac{N^{-1+1/\zeta} \big( \sum_{i=1}^{\infty} i^{-2/\zeta} \big)^{1/2}}{E[S]} = C N^{-1+1/\zeta}

for a constant C. Volatility scales as N^{-1+1/\zeta}, as in (9). In contrast, in the finite-variance case, the series \sum_{i=1}^{\infty} i^{-2/\zeta} diverges and we have \sum_{i=1}^{N} i^{-2/\zeta} \sim \int_1^N i^{-2/\zeta}\, di \sim N^{1-2/\zeta}/(1 - 2/\zeta), so that

h_N \sim \frac{N^{-1+1/\zeta} \big( N^{1-2/\zeta}/(1 - 2/\zeta) \big)^{1/2}}{E[S]} = C' N^{-1/2},

and as expected volatility scales as N^{-1/2}.
Rigorous Proof. When ζ > 2, the variance of firm sizes is finite and I use Proposition 1. When ζ ≤ 2, I observe that S_i^2 has power-law exponent ζ/2 ≤ 1, as shown by

P(S^2 > x) = P(S > x^{1/2}) = a (x^{1/2})^{-\zeta} = a x^{-\zeta/2}.

So to handle the numerator of (11), I use Lévy's theorem from Appendix A. This implies

N^{-2/\zeta} \sum_{i=1}^{N} S_i^2 \xrightarrow{d} u,

where u is a Lévy-distributed random variable with exponent ζ/2. So when ζ ∈ (1, 2], I can use the fact (12) to conclude

N^{1-1/\zeta} h = \frac{\big( N^{-2/\zeta} \sum_{i=1}^{N} S_i^2 \big)^{1/2}}{N^{-1} \sum_{i=1}^{N} S_i} \xrightarrow{d} \frac{u^{1/2}}{E[S]}.

When ζ = 1, additional care is required, because E[S] = ∞. Lévy's theorem applied to X_i = S_i gives a_N = N and b_N = N ln N, hence

\frac{1}{N} \Big( \sum_{i=1}^{N} S_i - N \ln N \Big) \xrightarrow{d} g,

where g follows a Lévy distribution with exponent 1, which implies

(14)   Y = \sum_{i=1}^{N} S_i \sim N \ln N.

I conclude h ∼ u^{1/2}/ln N. Q.E.D.
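To illustrate Propositions 1 and 2 numerically, the following Monte Carlo sketch (my own, not code from the paper; the thin-tailed benchmark, seed, and replication counts are arbitrary assumptions) tracks how the herfindahl h, and hence σGDP = σh, decays with N for Zipf versus thin-tailed firm sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_h(draw_sizes, N, reps=100):
    """Median herfindahl h = sqrt(sum S_i^2) / sum S_i over simulated economies."""
    hs = []
    for _ in range(reps):
        S = draw_sizes(N)
        hs.append(np.sqrt((S ** 2).sum()) / S.sum())
    return np.median(hs)

zipf = lambda N: rng.pareto(1.0, N) + 1.0      # P(S > x) = x**-1, i.e., zeta = 1
thin = lambda N: rng.lognormal(0.0, 0.5, N)    # finite-variance benchmark

for N in (10**3, 10**4, 10**5, 10**6):
    print(f"N={N:>7}  h_zipf={median_h(zipf, N):.4f}  "
          f"h_thin={median_h(thin, N):.4f}  1/sqrt(N)={N ** -0.5:.4f}")
# h_thin shrinks like 1/sqrt(N), as in Proposition 1; h_zipf decays only
# logarithmically, as in Proposition 2 (roughly the 12% the text reports
# for N = 10^6 under Zipf's law).
```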
I conclude with a few remarks. Proposition 2 offers a resolution to the debate between Horvath (1998, 2000) and Dupor (1999). Horvath submitted evidence that sectoral shocks may be enough to generate aggregate fluctuations. Dupor (1999) debated this on theoretical grounds and claimed that Horvath was able to generate large aggregate fluctuations only because he used a moderate number of sectors (N = 36). If he had many more finely disaggregated sectors (e.g., 100 times as many), then aggregate volatility would decrease in 1/√N (e.g., 10 times smaller). Proposition 2 illustrates that both viewpoints are correct, but apply in different settings. Dupor's reasoning holds only in a world
of small firms, when the central limit theorem can apply. Horvath's empirical world is one where the size distribution of firms is sufficiently fat-tailed that the central limit theorem does not apply. Instead, Proposition 2 applies and GDP volatility remains substantial even if the number N of subunits is large.

Though the benchmark case of Zipf's law is empirically relevant, and theoretically clean and appealing, many arguments in this paper do not depend on it. The results only require that the herfindahl of actual economies is sufficiently large. For instance, if the distribution of firm sizes were lognormal with a sufficiently high variance, then quantitatively very little would change. The herfindahls generated by a Zipf distribution are reasonably high. For N = 10^6 firms, with an equal distribution of sizes, h = 1/√N = 0.1%, but in a Zipf world with ζ = 1, Monte Carlo simulations show that the median h = 12%. With a firm volatility of σ = 12%, this corresponds to a GDP volatility σh of 0.012% for identically sized firms and a more respectable 1.4% for a Zipf distribution of firm sizes. This is the theory under the Zipf benchmark, which has a claim to hold across countries and clarifies what we can expect independently of the imperfections of data sets and data collection.

2.4. Can Granular Effects Be Large Enough in Practice? A Calibration

I now examine how large we can expect granular effects to be. For greater realism, I incorporate two extra features compared to the island economy: input–output linkages and the endogenous response of inputs to initial disturbances. I start with the impact of linkages.

2.4.1. Economies With Linkages

Consider an economy with N competitive firms buying intermediary inputs from one another. Let firm i have Hicks-neutral productivity growth dπi. Hulten (1978) showed that the increase in aggregate TFP is9
(15)   \frac{d\,TFP}{TFP} = \sum_i \frac{\text{sales of firm } i}{GDP}\, d\pi_i.
This formula shows that, somewhat surprisingly, we can calculate TFP shocks without knowing the input–output matrix: the sufficient statistic for the impact of firm i is its size, as measured by its sales (i.e., gross output rather than net output). This helps simplify the analysis.10 In addition, the weights add up to more than 1. This reflects the fact that productivity growth of 1% in a firm

9 For completeness, Appendix B rederives and generalizes Hulten's theorem.
10 However, to study the propagation of shocks and the origin of size, the input–output matrix can be very useful. See Carvalho (2009) and Acemoglu, Ozdaglar, and Tahbaz-Salehi (2010), who studied granular effects in the economy viewed as a network.
generates an increase in produced values equal to 1% times its sales, not times its sales net of inputs (which would be the value added). The firm's sales are the proper statistic for that social value.

I now draw the implications for TFP volatility. Suppose productivity shocks dπi are uncorrelated with variance σπ². Then the variance of productivity growth is

(16)   \text{var}\Big( \frac{d\,TFP}{TFP} \Big) = \sum_i \Big( \frac{\text{sales of firm } i}{GDP} \Big)^2 \text{var}(d\pi_i),

and so the volatility of the growth of TFP is

(17)   \sigma_{TFP} = h \sigma_\pi,

where h is the sales herfindahl,

(18)   h = \Big( \sum_{i=1}^{N} \Big( \frac{\text{sales}_{it}}{GDP_t} \Big)^2 \Big)^{1/2}.

I now examine the empirical magnitude of the key terms in (17), starting with σπ.

2.4.2. Large Firms Are Very Volatile

Most estimates of plant-level volatility find very large volatilities of sales and employment, with an order of magnitude σ = 30–50% per year (e.g., Caballero, Engel, and Haltiwanger (1997), Davis, Haltiwanger, and Schuh (1996)). Also, the volatility of firm size in Compustat is very large, 40% per year (Comin and Mulani (2006)). Here I focus the analysis on the top 100 firms.

Measuring firm volatility is difficult, because various frictions and identifying assumptions provide conflicting predictions about links between changes in total factor productivity and changes in observable quantities such as sales and employment. I consider the volatility of three measures of growth rates: Δln(sales_it/employees_it), Δln sales_it, and Δln employees_it. For each measure and each year, I calculate the cross-sectional variance among the top 100 firms of the previous year and take the average.11 I find standard deviations of 12%, 12%, and 14% for, respectively, growth rates of the sales per employee, of sales, and of employees. Also, among the top 100 firms, the sample correlations are 0.023, 0.073, and 0.033, respectively, for each of the three measures.12

11 In other terms, for each year t, I calculate the cross-sectional variance of growth rates, \sigma_t^2 = K^{-1} \sum_{i=1}^{K} g_{it}^2 - (K^{-1} \sum_{i=1}^{K} g_{it})^2, with K = 100. The corresponding average standard deviation is [T^{-1} \sum_{t=1}^{T} \sigma_t^2]^{1/2}.
12 For each year, we measure the sample correlation \rho_t = \big[ \frac{1}{K(K-1)} \sum_{i \neq j} g_{it} g_{jt} \big] / \big[ \frac{1}{K} \sum_i g_{it}^2 \big], with K = 100. The correlations are positive. Note that a view that would attribute the major firm-level movements to shocks to the relative demand for a firm's product compared to its competitors would counterfactually predict a negative correlation.
Hence, the correlation between growth rates is small. At the firm level, most variation is idiosyncratic. In conclusion, the top 100 firms have a volatility of 12% based on sales per employee. In what follows, I use σπ = 12% per year for firm-level volatility as the baseline estimate.

2.4.3. Herfindahls and Induced Volatility

I next consider the impact of endogenous factor usage on GDP. Calling Λ TFP, many models predict that when there are no other disturbances, GDP growth dY/Y is proportional to TFP growth dΛ/Λ: dY/Y = μ dΛ/Λ for some μ ≥ 1 that reflects factor usage; alternatively, via (15),

(19)   \frac{dY}{Y} = \mu \sum_i \frac{\text{sales of firm } i}{Y}\, d\pi_i.
This gives a volatility of GDP equal to σGDP = μσTFP , and via (17), (20)
σGDP = μσπ h
To examine the size of μ, I consider a few benchmarks. In a short-term model where capital is fixed in the short run and the Frisch elasticity of labor supply is φ, μ = 1/(1 − αφ/(1 + φ)); if the supply of capital is flexible (e.g., via variable utilization or the current account), then μ = (1 + φ)/α.13 With an effective Frisch elasticity of 2 (as recommended by Hall (2009) for an inclusive elasticity that includes movements in and out of the labor force), those values are μ = 1.8 and μ = 4.5. If TFP is a geometrical random walk, in the neoclassical growth model where only capital can be accumulated, in the long run we have μ = 1/α, where α is the labor share; with α = 2/3, this gives μ = 1.5.14 I use the average of the three above values, μ = 2.6.

Empirically, the sales herfindahl h is quite large: h = 5.3% for the United States in 2008 and h = 22% in an average over all countries.15 This means, parenthetically, that the United States is a country with relatively small firms (compared to GDP), where the granular hypothesis might be the hardest to establish.

13 This can be seen by solving \max_L \Lambda K^{1-\alpha} L^\alpha - L^{1+1/\phi} or \max_{K,L} \Lambda K^{1-\alpha} L^\alpha - rK - L^{1+1/\phi}, respectively, which gives Y \propto \Lambda^\mu for the announced value of μ. For this derivation, I use the local representation with a quasilinear utility function, but the result does not depend on that.
14 If Y_t = \Lambda_t K_t^{1-\alpha} L^\alpha, \Lambda_t \propto e^{\gamma t}, and capital is accumulated, then in a balanced growth path, Y_t \propto K_t \propto \Lambda_t^{1/\alpha}. This holds also with stochastic growth.
15 The U.S. data are from Compustat. The international herfindahls are from Acemoglu, Johnson, and Mitton (2009). They analyzed the Dun and Bradstreet data set, which has a good coverage of the major firms in many countries, though not a complete or homogeneous one.
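As a quick arithmetic check of these benchmarks (a sketch; φ = 2 and labor share α = 2/3 are the values quoted above):

```python
# Three benchmarks for the amplification factor mu discussed in the text.
phi, alpha = 2.0, 2.0 / 3.0                                 # Frisch elasticity, labor share

mu_fixed_capital = 1.0 / (1.0 - alpha * phi / (1.0 + phi))  # short run, capital fixed
mu_flexible_capital = (1.0 + phi) / alpha                   # flexible capital supply
mu_long_run = 1.0 / alpha                                   # neoclassical long run

mus = (mu_fixed_capital, mu_flexible_capital, mu_long_run)
print([round(m, 2) for m in mus], "-> average:", round(sum(mus) / 3, 2))
# [1.8, 4.5, 1.5] -> average: 2.6, the mu used in the calibration below.
```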
I can now incorporate all those numbers, using σπ = 12% seen above. Equation (20) yields a GDP volatility σGDP = 2.6 × 12% × 5.3% = 1.7% for the United States, and σGDP = 2.6 × 12% × 22% = 6.8% for a typical country. This is very much on the order of magnitude of GDP fluctuations. As always, further amplification mechanisms can increase the estimate. I conclude that idiosyncratic volatility seems quantitatively large enough to matter at the macroeconomic level.

2.5. Extension: GDP Volatility When the Volatility of a Firm Depends on Its Size

I now study the case where the volatility of a firm's percentage growth rate decreases with firm size, which will confirm the robustness of the previous results and yield additional predictions. I examine the functional form σ^{firm}(S) = kS^{−α} from (21). If α > 0, then large firms have a smaller standard deviation than small firms. Stanley, Amaral, Buldyrev, Havlin, Leschhorn, Maass, Salinger, and Stanley (1996) quantified the relation more precisely and showed that (21) holds for firms in Compustat, with α ≈ 1/6.

It is unclear whether the conclusions from Compustat can generalize to the whole economy. Compustat only comprises firms traded on the stock market, and these are probably more volatile than nontraded firms, as small volatile firms are more prone to seek outside equity financing, while large firms are in any case very likely to be listed in the stock market. This selection bias implies that the value of α measured from Compustat firms alone is presumably larger than in a sample composed of all firms. It is indeed possible that α may be 0 when estimated on a sample that includes all firms, as random growth models have long postulated. In any case, any deviations from Gibrat's law for variances appear to be small, that is, 0 ≤ α ≤ 1/6. If there is no diversification as size increases, then α = 0. If there is full diversification and a firm of size S is composed of S units, then α = 1/2. Empirically, firms are much closer to the Gibrat benchmark of no diversification, α = 0. The next proposition extends Propositions 1 and 2 to the case where firm volatility decreases with firm size.

PROPOSITION 3: Consider an islands economy, with N firms that have power-law distribution P(S > x) = (S_{min}/x)^ζ for ζ ∈ [1, ∞). Assume that the volatility of a firm of size S is

(21)   \sigma^{firm}(S) = \sigma \Big( \frac{S}{S_{min}} \Big)^{-\alpha}

for some α ≥ 0 and the growth rate is ΔS/S = σ^{firm}(S)u, where E[u] = 0. Define ζ' = ζ/(1 − α) and α' = min(1 − 1/ζ', 1/2), so that α' = 1/2 for ζ' ≥ 2. GDP fluctuations have the following form. If ζ > 1,

(22)   \frac{\Delta Y}{Y} \sim N^{-\alpha'} \frac{\zeta - 1}{\zeta} E[|u|^{\zeta'}]^{1/\zeta'} \sigma g_{\zeta'} \quad \text{if } \zeta' < 2,

(23)   \frac{\Delta Y}{Y} \sim N^{-\alpha'} \frac{\zeta - 1}{\zeta} \frac{E[S^2 \sigma^{firm}(S)^2]^{1/2} E[u^2]^{1/2}}{S_{min}} g_2 \quad \text{if } \zeta' \geq 2,

where g_{ζ'} is a standard Lévy distribution with exponent ζ'. Recall that g_2 is simply a standard Gaussian distribution. If ζ = 1,

(24)   \frac{\Delta Y}{Y} \sim \frac{N^{-\alpha'}}{\ln N} E[|u|^{\zeta'}]^{1/\zeta'} \sigma g_{\zeta'} \quad \text{if } \zeta' < 2,

(25)   \frac{\Delta Y}{Y} \sim \frac{N^{-\alpha'}}{\ln N} \frac{E[S^2 \sigma^{firm}(S)^2]^{1/2} E[u^2]^{1/2}}{S_{min}} g_2 \quad \text{if } \zeta' \geq 2.

In particular, the volatility σ^{GDP}(Y) of GDP growth decreases as a power-law function of GDP Y,

(26)   \sigma^{GDP}(Y) \propto Y^{-\alpha'}.

To see the intuition for Proposition 3, we apply the case of Zipf's law (ζ = 1) to an example with two large countries, 1 and 2, in which country 2 has twice as many firms as country 1. Its largest K firms are twice as large as the largest firms of country 1. However, scaling according to (21) implies that their volatility is 2^{−α} times the volatility of the top firms in country 1. Hence, the volatility of country 2's GDP is 2^{−α} times the volatility of country 1's GDP (i.e., (26)). Putting this another way, under the case presented by Proposition 3 and ζ = 1, large firms are less volatile than small firms (equation (21)). The top firms in big countries are larger (in an absolute sense) than top firms in small countries. As the top firms determine a country's volatility, big countries have less volatile GDP than small countries (equation (26)).

Also, one can reinterpret Proposition 3 by interpreting a large firm as a "country" made up of smaller entities. If these entities follow a power-law distribution, then Proposition 3 applies and predicts that the fluctuations of the growth rate Δln S_it, once rescaled by S_it^{−α}, follow a Lévy distribution with exponent min{ζ/(1 − α), 2}. Lee, Amaral, Meyer, Canning, and Stanley (1998) plotted this empirical distribution, which looks roughly like a Lévy stable distribution. It could be that the fat-tailed distribution of firm growth comes from the fat-tailed distribution of the subcomponents of a firm.16

A corollary of Proposition 3 may be worth highlighting.

COROLLARY 1—Similar Scaling of Firms and Countries: When Zipf's law holds (ζ = 1) and α ≤ 1/2, we have α' = α, that is, firms and countries should see their volatility scale with a similar exponent:

(27)   \sigma^{firms}(S) \propto S^{-\alpha}, \qquad \sigma^{GDP}(Y) \propto Y^{-\alpha}.

16 See Sutton (2002) for a related model, and Wyart and Bouchaud (2003) for a related analysis, which acknowledges the contribution of the present article, which was first circulated in 2001.
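Corollary 1 can also be checked by simulation. A minimal sketch (mine; the grid of N, the replication count, and α = 1/6 are illustrative assumptions) regresses log GDP volatility on log GDP across Zipf economies with σ^{firm}(S) = σS^{−α}:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma = 1.0 / 6.0, 0.12          # Gibrat deviation and base volatility

logY, logvol = [], []
for N in (10**3, 10**4, 10**5, 10**6):
    for _ in range(50):
        S = rng.pareto(1.0, N) + 1.0    # Zipf sizes (zeta = 1), S_min = 1
        w = S / S.sum()                 # output shares
        # GDP growth variance = sum_i w_i^2 * sigma_firm(S_i)^2, as in (3).
        vol = np.sqrt(((w * sigma * S ** -alpha) ** 2).sum())
        logY.append(np.log(S.sum()))
        logvol.append(np.log(vol))

slope = np.polyfit(logY, logvol, 1)[0]
print(f"fitted exponent {slope:.3f} vs. -alpha = {-alpha:.3f}")
# The fitted slope should be near -alpha (up to logarithmic corrections),
# the scaling that equation (26) and Corollary 1 predict under Zipf's law.
```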
Interestingly, Lee et al. (1998) presented evidence that supports (27), with a small exponent α ≈ 1/6 (see also Koren and Tenreyro (2007)). A more systematic investigation of this issue would be interesting.

Finally, Proposition 3 adopts the point of view of an economist who would not know the sizes of firms in the country. Then the best guess is a Lévy distribution of GDP fluctuations. However, given precise knowledge of the size of firms, GDP fluctuations will depend on the details of the distribution of the microeconomic shocks u_i.

Before concluding this theoretical section, let me touch on another very salient feature of business cycles: firms and sectors comove. As seen by Long and Plosser (1983), models with production and demand linkages can generate comovement. Carvalho and Gabaix (2010) worked out such a model with purely idiosyncratic shocks and demand linkages. In that economy, the equilibrium growth rates of sales, employees, and labor productivity can be expressed as

(28)   g_{it} = a \varepsilon_{it} + b f_t, \qquad f_t \equiv \sum_{j=1}^{N} \frac{S_{j,t-1}}{Y_{t-1}} \varepsilon_{jt},

where ε_it is the firm idiosyncratic productivity shock. Hence, the economy is a one-factor model, but, crucially, the common factor f_t is nothing but a sum of the idiosyncratic firm shocks. In their calibration, over 90% of output variance will be attributed to comovement, as in the empirical findings of Shea (2002). Hence, a calibrated granular model with linkages and only idiosyncratic shocks may account for a realistic amount of comovement. This arguably good feature of granular economies generates econometric challenges, as we shall now see.

3. TENTATIVE EMPIRICAL EVIDENCE FROM THE GRANULAR RESIDUAL

3.1. The Granular Residual: Motivation and Definition

This section presents tentative evidence that the idiosyncratic movements of the top 100 firms explain an important fraction (one-third) of the movement of total factor productivity (TFP). The key challenge is to identify idiosyncratic shocks. Large firms could be volatile because of aggregate shocks, rather than the other way around. There is no general solution for this "reflection problem" (Manski (1993)). I use a variety of ways to measure the share of idiosyncratic shocks.

I start with a parsimonious proxy for the labor productivity of firm i, the log of its sales per worker:
(29)   z_{it} := \ln \frac{\text{sales of firm } i \text{ in year } t}{\text{number of employees of firm } i \text{ in year } t}.

This measure is selected because it requires only basic data that are more likely to be available for non-U.S. countries, unlike more sophisticated measures
such as a firm-level Solow residual. Most studies that construct productivity measures from Compustat data use (29). I define the productivity growth rate as g_it = z_it − z_{i,t−1}. Various models (including the one in the National Bureau of Economic Research (NBER) working paper version of this article) predict that, indeed, the productivity growth rate is closely related to g_it. Suppose that productivity evolves as

(30)   g_{it} = \beta' X_{it} + \varepsilon_{it},

where X_it is a vector of factors that may depend on firm characteristics at time t − 1 and on factors at time t (e.g., as in equation (28)). My goal is to investigate whether ε_it, the idiosyncratic component of the total factor productivity growth rate of large firms, can explain aggregate TFP. More precisely, I would like to empirically approximate the ideal granular residual Γ*_t, which is the direct rewriting of (15):

(31)   \Gamma_t^* := \sum_{i=1}^{K} \frac{S_{i,t-1}}{Y_{t-1}} \varepsilon_{it}.

It is the sum of idiosyncratic firm shocks, weighted by size. I wish to see what fraction of the total variance of GDP growth comes from the granular residual, as the theory (19) predicts that GDP growth is g_{Yt} = μΓ*_t.

I need to extract ε_it. To do so, I estimate (30) for the top Q ≥ K firms of the previous year, on a vector of observables that I will soon specify. I then form the estimate of the idiosyncratic firm-level productivity shock as \hat{\varepsilon}_{it} = g_{it} - \hat{\beta}' X_{it}. I define the "granular residual" Γt as
(32)   \Gamma_t := \sum_{i=1}^{K} \frac{S_{i,t-1}}{Y_{t-1}} \hat{\varepsilon}_{it}.

Identification is achieved if the measured granular residual Γt is close to the ideal granular residual Γ*_t. Two particularizations are useful, because they do not demand much data and are transparent. They turn out to do virtually as well as the more complicated procedures I will also consider.

The simplest specification is to control for the mean growth rate in the sample, that is, to have X_it = ḡ_t, where \bar{g}_t = Q^{-1} \sum_{i=1}^{Q} g_{it}. Here, I take the average over the top Q firms. We could have Q = K or take the average over more firms. In practice, I will calculate the granular residual over the top K = 100 firms, but take the averages for the controls over the top Q = 100 or 1000 firms. Then the granular residual is the weighted sum of the firm's growth rate minus the average firm growth rate:

(33)   \Gamma_t = \sum_{i=1}^{K} \frac{S_{i,t-1}}{Y_{t-1}} (g_{it} - \bar{g}_t).
Another specification is to control for the mean growth ḡ_{I_i,t}, the equal-weighted average productivity growth rate among firms that are in i's industry and among the top Q firms therein. Then X_it = ḡ_{I_i,t}. That gives

(34)   \Gamma_t = \sum_{i=1}^{K} \frac{S_{i,t-1}}{Y_{t-1}} (g_{it} - \bar{g}_{I_i,t}).
It is the weighted sum of the firm growth rates minus the growth rates of other firms in the same industry. The term g_it − ḡ_{I_i,t} may be closer to the ideal ε_it than g_it − ḡ_t, as ḡ_{I_i,t} may control better than ḡ_t for industry-wide disturbances, for example, industry-wide real price movements.

Before proceeding, I state a result that establishes sufficient conditions for identification.

PROPOSITION 4: Suppose that (i) decomposition (30) holds with a vector of observables X_it and that (ii) \sum_{i=1}^{\infty} (S_{i,t-1}/Y_{t-1})^2 E[|X_{it}|^2] < \infty. Then, as the number of firms becomes large (in K or in Q ≥ K), \Gamma_t(K, Q) - \Gamma_t^*(K) \to 0 almost surely, that is, the empirical granular residual Γt is close to the ideal granular residual Γ*_t.

Assumption (i) is the substantial one. Given that in practice I will have X_it made of ḡ_t and ḡ_{I_i,t}, and their interaction with firm size, I effectively assume that the average growth rate of firms and their industries, perhaps interacted with firm size or some nonlinear transformation of it, spans the vector of factors. In other terms, firms within a given industry respond in the same way to common shocks or respond in a way that is related to firm size as in (36) below. This is the case under many models, but they are not fully general. Indeed, without some sort of parametric restriction, there is no solution (Manski (1993)). A typical problematic situation would be the case where the top firm has a high loading on industry factors that is not captured by its size. Then, instead of the large firms affecting the common factor, the factor would affect the large firms. However, I do control for size and the interaction between size and industry, and aggregate effects, so in that sense I can hope to be reasonably safe.17

Assumption (ii) is simply technical and is easily verified. For instance, it is verified if E[X_it²] is finite and the herfindahl is bounded. Formally, the herfindahl (which, as we have seen, is small anyway) is bounded if the total sales to out-

17 The above reflects my best attempt with Compustat data. Suppose one had continuous-time firm-level data and could measure the beginning of a strike, the launch of a new product, or the sales of a big export contract. These events would be firm-level shocks. It would presumably take some time to reverberate in the rest of the economy. Hence, a more precise understanding would be achieved. Perhaps future data (e.g., using newspapers to approximate continuous-time information) will be able to systematically achieve this extra measure of identification via the time series.
put ratio is bounded by some amount B, as \sum_{i=1}^{\infty} (S_{i,t-1}/Y_{t-1})^2 \leq \big( \sum_{i=1}^{\infty} S_{i,t-1}/Y_{t-1} \big)^2 \leq B^2. Note that here we do not need to assume a finite number of firms, and that in practice B ≈ 2 (Jorgenson, Gollop, and Fraumeni (1987)). To complete the econometric discussion, let me also mention a small sample bias: The R² measured by a regression will be lower than the true R², because the control by ḡ_t effectively creates an errors-in-variables problem. This effect, which can be rather large (and biases the results against the granular hypothesis), is detailed in the Supplemental Material (Gabaix (2011)).

I would like to conclude with a simple economic example that illustrates the basic granular residual (equation (33)).18 Suppose that the economy is made of one big firm which produces half of output, and a million other very small firms, and that I have good data on 100 firms: the big firm and the top 99 largest of the very small firms. The standard deviation of all growth rates is 10%, and growth rates are given by g_it = X_t + ε_it, where X_t is a common shock. Suppose that, in a given year, GDP increases by 3% and that the big firm has growth of, say, 6%, while the average of the small ones is close to 0%. What can we infer about the origins of shocks? If one thinks of all this being generated by an aggregate shock of 3%, then the distribution of implied idiosyncratic shocks is 3% for the big firm and −3% on average for all small ones. The probability that the average of the i.i.d. small firms is −3%, given the law of large numbers for these firms, is very small. Hence, it is more likely that the average shock X_t is around 0%, and the economy-wide growth of 3% comes from an idiosyncratic shock to the large firm equal to 6%. The estimate of the aggregate shock is captured by ḡ_t, which is close to 0%, and the estimate of the contribution of idiosyncratic shocks is captured by the granular residual, Γ = 3%.

3.2. Empirical Implementation

3.2.1. Basic Specification

I use annual U.S. Compustat data from 1951 to 2008. For the granular residual, I take the K = 100 largest firms in Compustat according to the previous year's sales that have valid sales and employee data for both the current and previous years and that are not in the oil, energy, or finance sectors.19 Industries are three-digit Standard Industrial Classification (SIC) codes. Compustat contains some large outliers, which may result from extraordinary events, such as a merger. To handle these outliers, I winsorize the extreme demeaned growth rates at 20%.20
18 I thank Olivier Blanchard for this example.
19 For firms in the oil/energy sector, the wild swings in worldwide energy prices make (29) too poor a proxy of total factor productivity. Likewise, the "sales" of financial firms do not mesh well with the meaning ("gross output") used in the present paper; this exclusion has little impact, though it is theoretically cleaner.
20 For instance, I construct (32) by winsorizing \hat{\varepsilon}_{it} at M = 20%, that is, by replacing it by T(\hat{\varepsilon}_{it}), where T(x) = x if |x| ≤ M, and T(x) = sign(x)M if |x| > M. I use M = 20%, but results are not materially sensitive to the choice of that threshold.
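The inference in this example can be spelled out numerically; a small sketch using only the stylized numbers of the example above (all values are the text's assumptions, not data):

```python
# One big firm (GDP share 1/2) grows 6%; the 99 observed small firms average ~0%.
share_big, g_big, g_small_avg, n_obs = 0.5, 0.06, 0.0, 100

# Estimate of the common shock X_t: the equal-weighted mean growth g_bar, which
# the 99 i.i.d. small firms pin down near 0% by the law of large numbers.
g_bar = (g_big + (n_obs - 1) * g_small_avg) / n_obs

# Granular residual, equation (33): size-weighted sum of demeaned growth rates.
# The small firms' total weight is negligible, so the big firm dominates.
gamma = share_big * (g_big - g_bar)

print(f"estimated common shock ~ {g_bar:.2%}, granular residual ~ {gamma:.2%}")
# -> roughly 0% and 3%: the 3% GDP growth is attributed to the big firm's shock.
```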
TABLE I
EXPLANATORY POWER OF THE GRANULAR RESIDUAL^a

                  GDP Growth_t              Solow_t
                  (1)         (2)           (3)         (4)
(Intercept)       0.018**     0.017**       0.011**     0.01**
                  (0.0026)    (0.0025)      (0.002)     (0.0021)
Γt                1.8*        2.5**         2.1**       2.3**
                  (0.69)      (0.69)        (0.54)      (0.57)
Γt−1              2.6**       2.9**         1.2*        1.3*
                  (0.71)      (0.67)        (0.55)      (0.56)
Γt−2                          2.1**                     0.65
                              (0.71)                    (0.59)
N                 56          55            56          55
R²                0.266       0.382         0.261       0.281
Adj. R²           0.239       0.346         0.233       0.239

a For the year t = 1952 to 2008, per capita GDP growth and the Solow residual are regressed on the granular residual Γt of the top 100 firms (equation (33)). The firms are the largest by sales of the previous year. Standard errors are given in parentheses.
Table I presents regressions of GDP growth and the Solow residual on the simplest granular residual (33). These regressions are supportive of the granular hypothesis. The R²'s are reasonably high, at 34.6% for GDP growth and around 23.9% for the Solow residual when using two lags. We will soon see that the industry-demeaned granular residual does even better. If only aggregate shocks were important, then the R² of the regressions in Table I would be zero. Hence, the good explanatory power of the granular residual is inconsistent with a representative firm framework. It is also inconsistent with the hypothesis that most firm-level volatility might be due to a zero-sum redistribution of market shares. Let us now examine the results if we incorporate a more fine-grained control for industry shocks.

3.2.2. Controlling for Industry Shocks

I next control for industry shocks, that is, use specification (34). Table II presents the results, which are consistent with those in Table I. The adjusted
TABLE II
EXPLANATORY POWER OF THE GRANULAR RESIDUAL WITH INDUSTRY DEMEANING^a

                  GDP Growth_t              Solow_t
                  (1)         (2)           (3)         (4)
(Intercept)       0.019**     0.017**       0.011**     0.011**
                  (0.0024)    (0.0022)      (0.0019)    (0.0019)
Γt                3.4**       4.5**         3.3**       3.7**
                  (0.86)      (0.82)        (0.68)      (0.72)
Γt−1              3.4**       4.3**         1.5*        1.9**
                  (0.82)      (0.78)        (0.65)      (0.68)
Γt−2                          2.7**                     0.77
                              (0.79)                    (0.69)
N                 56          55            56          55
R²                0.356       0.506         0.334       0.372
Adj. R²           0.332       0.477         0.309       0.335

a For the year t = 1952 to 2008, per capita GDP growth and the Solow residual are regressed on the granular residual Γt of the top 100 firms (equation (34)), removing the industry mean within this top 100. The firms are the largest by sales of the previous year. Standard errors are given in parentheses.
R²'s are a bit higher: about 47.7% for GDP growth and 33.5% for the Solow residual when using two lags.21 This table reinforces the conclusion that idiosyncratic movements of the top 100 firms seem to explain a large fraction (about one-third, depending on the specification) of GDP fluctuations. In addition, industry controls, which may be preferable to a single aggregate control on a priori grounds, slightly strengthen the explanatory power of the granular residual. In terms of economics, Tables I and II indicate that the lagged granular residual helps explain GDP growth, and that the same-year "multiplier" μ is around 3.

3.2.3. Predicting GDP Growth With the Granular Residual

The above regressions attempt to explain GDP with the granular residual, that is, relating aggregate movement to contemporary firm-level idiosyncratic movements that may be more easily understood (as we will see in the narrative below). I now study forecasting GDP growth with past variables. In addition to the granular residual, I consider the main traditional predictors. I control for oil and monetary policy shocks by following the work of Hamilton (2003) and Romer and Romer (2004), which are arguably the leading way to control for oil and monetary policy shocks. I also include the 3-month nominal T-bill and the

21 The similarity of the results is not surprising, as the correlation between the simple and industry-demeaned granular residuals is 0.82.
TABLE III
PREDICTIVE POWER OF THE GRANULAR RESIDUAL, CONTROLLING FOR TERM SPREAD, OIL SHOCKS, AND MONETARY SHOCKS^a

                   (1)         (2)       (3)         (4)       (5)       (6)         (7)       (8)
(Intercept)        0.022**     0.02**    0.022**     0.026**   0.015     0.015       0.019**   0.021**
                   (0.0029)    (0.0029)  (0.0029)    (0.0057)  (0.0075)  (0.0079)    (0.0027)  (0.0073)
Oil_{t−1}          −0.00027*             −0.00024*                       −8.7e−05              −0.00017
                   (0.00012)             (0.00012)                       (0.00013)             (0.00012)
Oil_{t−2}          −0.00018              −0.00017                        −6.9e−05              −0.00012
                   (0.00012)             (0.00012)                       (0.00012)             (0.00011)
Monetary_{t−1}                 −0.083    −0.08                           −0.042                −0.051
                               (0.057)   (0.055)                         (0.055)               (0.05)
Monetary_{t−2}                 −0.059    −0.038                          −0.024                0.043
                               (0.057)   (0.056)                         (0.054)               (0.053)
r_{t−1}                                              −0.75**   −0.6      −0.45                 −0.41
                                                     (0.2)     (0.32)    (0.37)                (0.34)
r_{t−2}                                              0.65**    0.56      0.43                  0.39
                                                     (0.19)    (0.32)    (0.37)                (0.34)
Term spread_{t−1}                                              0.32      0.38                  0.4
                                                               (0.6)     (0.64)                (0.58)
Term spread_{t−2}                                              0.45      0.27                  −0.38
                                                               (0.47)    (0.54)                (0.53)
Γ_{t−1}                                                                              3.5**     3.3**
                                                                                     (0.96)    (1)
Γ_{t−2}                                                                              1.2       2.3*
                                                                                     (0.92)    (0.97)
N                  55          55        55          55        55        55          55        55
R²                 0.121       0.0764    0.175       0.22      0.288     0.312       0.215     0.463
Adj. R²            0.0871      0.0409    0.109       0.191     0.231     0.192       0.185     0.341

a For the year t = 1952 to 2008, per capita GDP growth is regressed on the lagged values of the granular residual Γt of the top 100 firms (equation (34)), of the Hamilton (for oil) and Romer–Romer (for money) shocks, and the term spread (the government 5-year bond yield minus the 3-month yield). We see that the granular residual has good incremental predictive power even beyond the term spread. Standard errors are given in parentheses.
term spread (which is defined as the 5-year bond rate minus the 3-month bond rate), which is often found to be a very good predictor of GDP (those two endogenous variables are arguably more "diagnostic" than "causal," though). Table III presents the results. The granular residual has an adjusted R² equal to 18.5% (column 7). The traditional economic factors (oil and money shocks) have an adjusted R² of 10.9% (column 3). Past GDP growth has a very small adjusted R² of −0.3%, a number not reported in Table III to avoid cluttering the table too much. The traditional diagnostic financial factors (the interest rate and the term spread) have an adjusted R² of 23.1% (column 5). Putting all predictors together, the adjusted R² is 34.1%
(column 8) and the granular residual brings an incremental adjusted R² of 14.9% (compared to column 6). I conclude that the granular residual is a new and apparently useful predictor of GDP. This result suggests that economists might use the granular residual to improve not only the understanding of GDP, but also its forecasting.

3.3. Robustness Checks

An objection to the granular residual is that the control for the common factors may be imperfect. Table IV shows the explanatory power of the granular residual, controlling for oil and monetary shocks. The adjusted R² is 47.7% for the granular residual (column 4); it is 8.2% and 2.3% for oil and monetary shocks, respectively (columns 1 and 2), and 49.5% for financial variables (interest rates and term spread, column 6). To investigate whether the granular residual does add explanatory power, the last column puts all those variables together (perhaps pushing the believable limit of ordinary least squares (OLS) because of the large number of regressors) and shows that the explanatory variables yield an adjusted R² of 76.7%. In conclusion, as a matter of "explaining" (in a statistical sense) GDP growth, the granular residual does nearly as well as all traditional factors together, and complements their explanatory power.

I report a few robustness checks in the Supplemental Material. For instance, among the explanatory variables of (30), I include not only ḡ_t or ḡ_{I_i,t}, but also their interaction with log firm size and its square. The impact of the control for size is very small. Using a number Q = 1000 of firms yields similar results, too. Finally, I could not regress g_it on GDP growth at time t because then by construction I would eliminate any explanatory power of ε̂_it.

I conclude that the granular residual has good explanatory power for GDP, even controlling for traditional factors. In addition, it has good forecasting power, complementing other factors. Hence, the granular residual must capture interesting firm-level dynamics that are not well captured by traditional aggregate factors. I have done my best to obtain "idiosyncratic" shocks; given that I do not have a clean instrument, the above results should still be considered provisional.

The situation is the analogue, with smaller stakes, of that of the Solow residual. Solow understood at the outset that there are very strong assumptions in the construction of his residual, in particular, full capacity utilization and no fixed costs. But a "purified" Solow residual took decades to construct (e.g., Basu, Fernald, and Kimball (2006)), requires much better data, is harder to replicate in other countries, and relies on special assumptions as well. Because of that, the Solow residual still endures, at least as a first pass. In the present paper too, it is good to have a first step in the granular residual, together with caveats that may help future research to construct a better residual. The conclusion of this article contains some other measures of granular residuals that build on the
TABLE IV
EXPLANATORY POWER OF THE GRANULAR RESIDUAL, CONTROLLING FOR OIL AND MONETARY SHOCKS AND INTEREST RATES^a

                     (1)          (2)          (3)          (4)          (5)          (6)          (7)          (8)
Intercept          0.023**      0.02**       0.022**      0.017**      0.019**      0.016*       0.02**       0.023**
                  (0.003)      (0.0029)     (0.003)      (0.0022)     (0.0023)     (0.0065)     (0.005)      (0.0048)
Oil_t             −9.8e−05                  −8.3e−05                  −4.6e−05                               −7.9e−05
                  (0.00011)                 (0.00012)                 (8.6e−05)                              (7.5e−05)
Oil_{t−1}         −0.00028*                 −0.00026*                 −0.00021*                              −0.00019*
                  (0.00012)                 (0.00012)                 (8.8e−05)                              (7.5e−05)
Oil_{t−2}         −0.00019                  −0.00019                  −0.00012                               −4.3e−05
                  (0.00012)                 (0.00012)                 (8.9e−05)                              (6.8e−05)
Monetary_t                     −0.0088      −0.03                     −0.057                                 −0.044
                               (0.059)      (0.058)                   (0.043)                                (0.032)
Monetary_{t−1}                 −0.08        −0.065                     0.012                                 −0.013
                               (0.061)      (0.059)                   (0.047)                                (0.033)
Monetary_{t−2}                 −0.061       −0.048                     0.031                                  0.095**
                               (0.059)      (0.058)                   (0.046)                                (0.033)
Γ_t                                                       4.5**        4.2**                     3.7**        4**
                                                         (0.82)       (0.88)                    (0.69)       (0.66)
Γ_{t−1}                                                   4.3**        4.5**                     2.8**        3.6**
                                                         (0.78)       (0.85)                    (0.71)       (0.68)
Γ_{t−2}                                                   2.7**        2.7**                     2.6**        2.8**
                                                         (0.79)       (0.8)                     (0.69)       (0.63)
r_t                                                                                 0.66*        0.69**       0.83**
                                                                                   (0.26)       (0.2)        (0.19)
r_{t−1}                                                                            −1.6**       −1.5**       −1.5**
                                                                                   (0.35)       (0.28)       (0.27)
r_{t−2}                                                                             1**          0.85**       0.7**
                                                                                   (0.29)       (0.23)       (0.22)
Term spread_t                                                                      −0.49        −0.11        −0.13
                                                                                   (0.52)       (0.41)       (0.38)
Term spread_{t−1}                                                                   0.17        −0.34        −0.37
                                                                                   (0.52)       (0.41)       (0.42)
Term spread_{t−2}                                                                   0.31        −0.02        −0.18
                                                                                   (0.39)       (0.32)       (0.33)
N                   55           55           55           55           55           55           55           55
R2                  0.133        0.0768       0.189        0.506        0.582        0.551        0.755        0.832
Adj. R2             0.0824       0.0225       0.0878       0.477        0.498        0.495        0.706        0.767

^a For the years t = 1952 to 2008, per capita GDP growth is regressed on the granular residual Γ_t of the top 100 firms (equation (34)) and on the contemporaneous and lagged values of the Hamilton (oil) shocks and the Romer–Romer (money) shocks. The firms are the largest by sales of the previous year. Standard errors are given in parentheses.
present paper. It could be that the recent factor-analytic methods (Stock and Watson (2002), Foerster, Sarte, and Watson (2008)) will prove useful for extending the analysis. One difficulty is that the identities of the top firms change over time, unlike in the typical factor-analytic setup. This said, another way to understand granular shocks is to examine some of them directly, a task to which I now turn.

3.4. A Narrative of GDP and the Granular Residual

Figure 2 presents a scatter plot of GDP growth against 3.4Γ_t + 3.4Γ_{t−1}, where the coefficients are those from Table II. I present a narrative of the most salient events in that graph.22 Some notations are useful. The firm-specific granular residual (or granular contribution) is defined to be Γ_it = (S_{it−1}/Y_{t−1}) ĝ_it, with ĝ_it = g_it − g_{I_i t}. The share of the industry-demeaned granular residual (GR) is defined as γ_it = Γ_it/Γ_t, and the share of GDP growth is defined as Γ_it/g_{Yt}, where g_{Yt} is the growth rate of GDP per capita minus its average value in the sample, for short “demeaned GDP growth.” Given the regression coefficients in Tables I and II, this share should arguably be multiplied by a productivity multiplier μ ≃ 3.
FIGURE 2.—Growth of GDP per capita against 3.4Γ_t + 3.4Γ_{t−1}, the industry-demeaned granular residual and its lagged value. The display of 3.4Γ_t + 3.4Γ_{t−1} is motivated by Table II, which yields regression coefficients on Γ_t and Γ_{t−1} of that magnitude.

22 A good source for firm-level information besides Compustat is the web site fundinguniverse.com, which compiles a well referenced history of the major companies. Google News, the yearly reports of the Council of Economic Advisors, and Temin (1998) are also useful.
To obtain a manageable number of important episodes, I report the events with |g_{Yt}| ≥ 0.7σ_Y and, in those years, report the firms for which |Γ_it/g_{Yt}| ≥ 0.14. I also consider the most extreme fifth of the years for Γ_t. I avoid, however, most points that are artefacts of mergers and acquisitions (more on that later). To avoid boring the reader with too many tales of car companies, I add a few non-car events that I found interesting economically or methodologically. A general caveat is that the direction of the causality is hard to assess definitively, as the controls g_{I_i t} for industry-wide movements are imperfect. With that caveat in mind, we can start reading Table V.

To interpret the table, let me take a salient and relatively easy year, 1970. This year features a major strike at General Motors, which lasted 10 weeks (September 15 to November 20). The 1970 row of Table V shows that GM’s sales fell by 31% and employment fell by 13%. Its labor productivity growth rate is thus −17.9% and, controlling for the industry mean productivity growth of 2.6% that year, GM’s demeaned growth rate is −20.5%. Given that GM’s sales the previous year were 2.47% of GDP, GM’s granular residual is Γ_it = −0.20 × 2.47% = −0.49%. That means the direct impact of this GM event is a change in GDP of −0.49% that year. Note also that with a productivity multiplier of μ ≃ 3, the imputed impact of GM on GDP is −1.47%. As GDP growth that year was 3% below trend (g_{Yt} = −3%), the direct share of the GM event is 0.49%/3% = 0.16 and its full imputed share is 1.47%/3% = 0.49. In some mechanical sense, the GM event appears to account for a fraction 0.17 of the GDP movement directly and, indirectly, for about 0.5 of the GDP innovation that year. It also accounts for a fraction 0.76 of the granular residual. Hence, it is plausible to interpret 1970 as a granular year, whose salient event was the GM strike and the turmoils around it.23

This example shows how the table is organized. Let me now present the rest of the narrative.

1952–1953: U.S. Steel faces a strike from about April 1952 to August 1952. U.S. Steel’s production falls by 13.1% in 1952 and rebounds by 19.5% in 1953. The 1953 event explains a share of 3.99 of the granular residual and 0.06 of excess GDP growth.

1955 experiences high GDP growth and a reasonably high granular residual. The likely microfoundation is a boom in car production. Two main specific factors seem to explain the car boom: the introduction of new models of cars and the fact that car companies engaged in a price war (Bresnahan (1987)). The car sales of GM increase by 21.9%, while employment increases by 7.9%. The demeaned growth rate is ĝ_it = 17.8%.

23 Temin (1998) noted that the winding down of the Vietnam War (which ended in 1975) may also be responsible for the slump of 1970. This is in part the case, as during 1968–1972 the ratios of defense outlays to GDP were 9.5, 8.7, 8.1, 7.3, and 6.7%. On the other hand, the ratios of total government outlays to GDP were, respectively, 20.6, 19.4, 19.3, 19.5, and 19.6% (source: Council of Economic Advisors (2005, Table B-79)). Hence the aggregate government spending shock was very small in 1970.
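As a sanity check on the 1970 arithmetic, here is a minimal sketch reproducing the Table V entries for GM from the raw growth rates quoted above; the numbers are those given in the text, and μ = 3 is the rough multiplier from Tables I and II.

```python
# Reproduce the 1970 GM entries of Table V from the quoted raw numbers.
dlog_sales, dlog_emp = -0.3106, -0.1320   # Delta log sales, Delta log employment
industry_mean_growth = 0.026              # car-industry productivity growth, 1970
sales_share_gdp = 0.0247                  # GM sales / GDP, 1969
gdp_demeaned_growth = -0.03               # GDP growth relative to trend
mu = 3                                    # productivity multiplier (Tables I and II)

g_it = dlog_sales - dlog_emp              # labor productivity growth: -17.9%
g_hat = g_it - industry_mean_growth       # demeaned growth: -20.5%
gamma_it = sales_share_gdp * g_hat        # firm granular residual: -0.49%

print(f"g_it = {g_it:.3f}, g_hat = {g_hat:.3f}, Gamma_it = {gamma_it:.4f}")
print(f"direct share  = {gamma_it / gdp_demeaned_growth:.2f}")        # ~0.17
print(f"imputed share = {mu * gamma_it / gdp_demeaned_growth:.2f}")   # ~0.5
```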
TABLE V
NARRATIVE^a

Year  Firm        S_{it−1}/Y_{t−1} (%)   g_it (%) [Δln S_it, Δln L_it]   ĝ_it (%)   Γ_it (%)   Γ_it/Γ_t   Γ_it/g_{Yt}   μΓ_it/g_{Yt}   Brief Explanation
1952  U.S. Steel  1.03    −10.75 [−13.10, −2.35]    −3.56   −0.037   0.061   −0.810   −2.430   Strike
1953  U.S. Steel  0.87     17.06 [19.51, 2.45]       5.86    0.051   3.985    0.060    0.180   Rebound from strike
1955  GM          2.58     14.00 [21.89, 7.88]      17.84    0.461   0.808    0.142    0.426   Boom in car production: New models and price war
1956  Ford        1.35    −20.95 [−21.96, −1.01]   −20.72   −0.270   0.407    0.145    0.435   End of price war
1956  GM          3.00    −13.55 [−17.61, −4.06]   −13.32   −0.400   0.603    0.215    0.645   End of price war
1957  GM          2.47      0.36 [−1.50, −1.85]    −12.38   −0.305   2.201    0.167    0.501   End of price war (aftermath)
1961  Ford        1.00     25.12 [23.64, −1.48]     27.03    0.199   4.131   −0.147   −0.441   Success of compact Falcon (rebound from Edsel failure)
1965  GM          2.56      7.45 [18.06, 10.61]     11.10    0.284   0.600    0.092    0.276   Boom in new-car sales
1967  Ford        1.55    −19.84 [−18.23, 1.61]    −14.91   −0.232   2.461    0.379    1.137   Strike
1970  GM          2.47    −17.85 [−31.06, −13.20]  −20.52   −0.493   0.757    0.165    0.495   Strike
1971  GM          1.81     25.58 [36.15, 10.57]     23.35    0.361   0.516    7.344   22.032   Rebound from strike
1972  Chrysler    0.71     16.76 [15.64, −1.13]     17.80    0.126   0.234    0.058    0.174   Rush of sales for subcompacts (Dodge Dart and Plymouth Valiant)
1972  Ford        1.46     14.18 [16.36, 2.18]      15.22    0.222   0.411    0.103    0.309   Rush of sales for subcompacts (Ford Pinto)
1974  GM          2.59    −11.31 [−21.28, −9.97]   −15.23   −0.394   0.913    0.115    0.345   Cars with poor gas mileage hit by higher oil price
1983  IBM^b       1.06     10.46 [11.76, 1.29]      10.52    0.111   0.177    0.071    0.213   Launch of the IBM PC
1987  GE^b        0.79     25.62 [8.33, −17.29]     21.46    0.158   1.110    0.357    1.071   Moving out of manufacturing and into finance and high-tech
1988  GE^b        0.83     21.42 [20.08, −1.33]     16.55    0.137   0.441    0.117    0.351   Moving out of manufacturing and into finance and high-tech
1996  AT&T        1.08     38.97 [−44.11, −83.08]   32.45    0.215   0.471    0.446    1.338   Spin-off of NCR and Lucent
2000  GE          1.20     20.56 [12.29, −8.27]     33.04    0.239   9.934    0.468    1.404   Sales topped $111bn, expansion of GE Medical Systems
2002  Walmart     2.16      8.61 [9.83, 1.22]        6.39    0.138   3.219   −0.099   −0.297   Success of lean distribution model

^a GE and GM are General Electric and General Motors, respectively. For each firm i, g_it, Δln S_it, and Δln L_it denote productivity, sales, and employment growth rates, respectively; ĝ_it = g_it − g_{I_i t} denotes industry-demeaned growth, and S_{it−1}/Y_{t−1} is the sales share of GDP. The firm granular residual is Γ_it = S_{it−1}(g_it − g_{I_i t})/Y_{t−1}, and Γ_it/Γ_t is the respective share of the granular residual. Γ_it/g_{Yt} is the direct share of the firm shock on demeaned GDP growth. The full share would be equal to μΓ_it/g_{Yt}, where μ = 3 is the typical productivity multiplier estimated from Tables I and II.
^b There is just one firm in this industry in the top 100, hence g_{I_i t} was replaced by g_t.
GM accounts for 81% of the granular residual, a direct fraction 0.14 of excess GDP growth, and an imputed fraction 0.43 of excess GDP growth.

1956–1957: In 1956, the price war in cars ends, and sales drop back to their normal level (the sales of General Motors decline by 17.6%; those of Ford decline by 22%). The granular residual is −0.66%, of which 60% is due to General Motors. Hence, one may provisionally conclude that the 1955–1956 boom–bust episode was in large part a granular event driven by new models and a price war in the car industry.24 In Figure 2, the 56 point is actually the sum of 1955 (granular boom) and 1956 (granular bust), and is unremarkable, but the bust is reflected in the 1957 point, which is the most extreme negative point in the granular residual.

1961: In previous years, Ford cancelled the Edsel brand and introduced, to great success, the Falcon, the leading compact car of its time. Ford’s demeaned growth rate is ĝ_it = 27% and its firm granular residual explains a fraction −0.15 of excess GDP growth. That is, without Ford’s success, the recession would have been worse.

1965 is an excellent year for GM, with the great popularity of its Chevrolet brand.

1967: Ford experiences a 64-day strike and a terrible year. Its demeaned growth rate is −14.9% and its granular residual is −0.23%. It explains a fraction 2.5 of the granular residual and 0.38 of GDP growth.

1970 is the GM year described above.

1971, which appears in Figure 2 as label “72,” representing the sum of the granular residuals in 1971 and 1972, is largely characterized by the rebound from the negative granular 1970 shock. Hence, the General Motors strike may explain the very negative 70 (1969 + 1970) point and the very positive 72 (1971 + 1972) point. Sales increase by 36.2% and employment increases by 10.6%. The firm granular residual is Γ_it = 0.36%, for a fraction of the granular residual of 0.52.

Another interesting granular event takes place in 1971. The Council of Economic Advisors (1972, p. 33) reports that “prospects of a possible steel strike after July 31st [1971], the expiration day of the labor contracts, caused steel consumers to build up stock in the first seven months of 71, after which these inventories were liquidated.” Here, a granular shock—the possibility of a steel strike—creates a large swing in inventories. Without exploring inventories here, one notes that such a plausibly orthogonal inventory shock could be used in future macroeconomic studies.

1972 is a very good year for Ford and Chrysler. Ford has an enormous success with its Pinto. At Chrysler, there is a rush of sales for the compact Dodge Dart and Plymouth Valiant (low-priced subcompacts). For those two firms, Γ_it = 0.22% and Γ_it = 0.13%, respectively.

1974 is probably not a granular year, because the oil shock was common to many industries. Still, the low value of the granular residual reflects the
24 To completely resolve the matter, one would like to control for the effect of the Korean War.
fact that the top three car companies, and particularly General Motors, were disproportionately affected by the shock. It is likely that if large companies had been producing more fuel-efficient cars, the granular residual would have been closer to 0, and the slump of 1974 could have been much more moderate. For instance, GM’s granular contribution is −0.39%, and its multiplier-adjusted contribution is −1.18%.

1983 is an excellent year for IBM, with the launch of the IBM PC. Its g_it = 10.5%, so that its granular residual is 0.11%.

1987–1988 is an instructive episode, in part for methodological reasons. After various investments and mergers and acquisitions in 1986–1987 (acquisition of financial services providers, e.g., Kidder Peabody, and high-tech companies such as a medical diagnostics business), the clear majority of GE’s earnings (roughly 80%, compared to 50% six years earlier) were generated in services and high technology. Its g_it is 26% and 21% in 1987 and 1988, respectively. Its fraction of the granular residual is 1.11 and 0.44, and its imputed growth fraction is 1.07 and 0.35. This episode can be viewed either as a purely formal reallocation of titles in economic activity (in which case it arguably should be discarded) or as a movement of “structural change” where this premier firm’s efforts (human and physical) are reallocated toward higher value-added activities, thereby potentially increasing economic activity.25 The same can be said about the next event.

1996: There is an intense restructuring at AT&T, with a spin-off of NCR and Lucent. AT&T recenters on higher productivity activities, and as a result its measured ĝ_it is 32.5%. This movement explains a fraction 0.47 of the granular residual and 0.45 of GDP growth.

2000 is a year of great productivity growth for GE, in particular via the expansion of GE Medical Systems. Its g_it is 20.6% and its firm granular residual is Γ_it = 0.24%.

2002 sees a surge in sales for Walmart, a vindication of its lean distribution model. The company’s share of U.S. GDP in 2001 was 2.2%. This approached the levels reached by GM (3% in 1956) and U.S. Steel Corp. (2.8% in 1917) when these firms were at their respective peaks. Its ĝ_it is 6.4% and its fraction of the granular residual is 3.22, while its fraction of demeaned GDP growth is −0.10.

We arrive at the limen of the financial crisis. 2007 sees three interesting granular events (not reported in the table), if one is willing to take the “sales” of financial firms at face value (it is unclear that one should). The labor productivity growth of AIG, Citigroup, and Merrill Lynch is −15%, −9%, and −25%, respectively, which gives them granular contributions of −0.09%, −0.18%,
25 Under the first interpretation, it would be interesting to build a more “purified” granular residual that filters out corporate finance events. Of course, to what extent those events should be filtered out is debatable.
and −0.10%. It would be interesting to exploit the hypothesis that the financial crisis was largely caused by the (ex post) mistakes of a few large firms, e.g., Lehman and AIG. Their large leverage and interconnectedness amplified those mistakes into a full-fledged crisis, instead of what could have been a run-of-the-mill downturn that would have affected the financial sector in a diffuse way. But doing justice to this issue would require another paper.

Figure 2 reveals that, in the 1990’s, granular shocks are smaller. Likewise, GDP volatility is smaller—reflecting the “great moderation” explored in the literature (e.g., McConnell and Perez-Quiros (2000)). Carvalho and Gabaix (2010) explored this link in more depth, and proposed that the decrease in granular volatility indeed explains the great moderation of GDP and its demise.

Finally, the bottom of Figure 2 contains three outliers that are not granular years. They have conventional “macro” interpretations. 1954 is often attributed to the end of the Korean War, and 1958 and 1982 (the “Volcker recession”) are attributed to tight monetary policy aimed at fighting inflation.

This narrative shows the importance of two types of events: some (e.g., a strike) inherently have a negative autocorrelation, while others (e.g., new models of cars) do not. It is conceivable that forecasting could be improved by taking that distinction into account.

4. CONCLUSION

This paper shows that the forces of randomness at the microlevel create an inexorable amount of volatility at the macrolevel. Because of random growth at the microlevel, the distribution of firm sizes is very fat-tailed (Simon (1955), Gabaix (1999), Luttmer (2007)). That fat-tailedness makes the central limit theorem break down, and idiosyncratic shocks to large firms (or, more generally, to large subunits in the economy, such as family business groups or sectors) affect aggregate outcomes. This paper illustrates this effect by taking the example of GDP fluctuations. It argues that idiosyncratic shocks to the top 100 firms explain a large fraction (one-third) of aggregate volatility. While aggregate shocks such as changes to monetary, fiscal, and exchange rate policy, and aggregate productivity shocks are clearly important drivers of macroeconomic activity, they are not the only contributors to GDP fluctuations. Using theory, calibration, and direct empirical evidence, this paper makes the case that idiosyncratic shocks are an important, and possibly the major, part of the origin of business-cycle fluctuations.

The importance of idiosyncratic shocks in aggregate volatility leads to a number of implications and directions for future research. First, and most evidently, to understand the origins of fluctuations better, one should not focus exclusively on aggregate shocks, but also on concrete shocks to large players, such as General Motors, IBM, or Nokia.

Second, shocks to large firms (such as a strike, a new innovation, or a CEO change), initially independent of the rest of the economy, offer a rich source of
shocks for vector autoregressions (VARs) and impulse response studies—the real-side equivalent of the Romer and Romer shocks for monetary economics. As a preliminary step in this direction, the granular residual, with a variety of specifications, is available in the Supplemental Material.

Third, this paper gives a new theoretical angle on the propagation of fluctuations. If Apple or Walmart innovates, its competitors may suffer in the short term and thus race to catch up. This creates rich industry-level dynamics (already actively studied in the industrial organization literature) that should be useful for studying macroeconomic fluctuations, since they allow one to trace the dynamics of productivity shocks.

Fourth, this argument could explain why people, in practice, do not know the state of the economy: “the state of the economy” depends on the behavior (productivity and investment behavior, among others) of many large and interdependent firms. Integrating this information is not easy, and no readily accessible single number can summarize this state. This contrasts with aggregate measures, such as GDP, which are easily observable. Conversely, agents who focus on aggregate measures may make potentially problematic inferences (see Angeletos and La’O (2010) and Veldkamp and Wolfers (2007) for research along those lines). This paper could therefore offer a new mechanism for the dynamics of “animal spirits.”

Finally, this mechanism might explain a large part of the volatility of many aggregate quantities other than output, for instance, inventories, inflation, short- or long-run movements in productivity, and the current account. Fluctuations of exports due to granular effects are explored in Canals et al. (2007) and di Giovanni and Levchenko (2009). The latter paper in particular finds that lowering trade barriers increases the granularity of the economy (as the most productive firms are selected) and implies an increase in the volatility of exports. Blank, Buch, and Neugebauer (2009) constructed a “banking granular residual” and found that negative shocks to large banks negatively impact small banks. Malevergne, Santa-Clara, and Sornette (2009) showed that the granular residual of stock returns (the return of a large firm minus the return of the average firm) is an important priced factor in the stock market and explains the performance of Fama–French factor models. Carvalho and Gabaix (2010) found that time-series changes in granular volatility predict well the volatility of GDP, including the “great moderation” and its demise.

In sum, this paper suggests that the study of very large firms can offer a useful angle of attack on some open issues in macroeconomics.

APPENDIX A: LÉVY’S THEOREM

Lévy’s theorem (Durrett (1996, p. 153)) is the counterpart of the central limit theorem for infinite-variance variables.

THEOREM 1—Lévy’s Theorem: Suppose that X_1, X_2, ... are i.i.d. with a distribution that satisfies (i) lim_{x→∞} P(X_1 > x)/P(|X_1| > x) = θ ∈ [0, 1] and
(ii) P(|X_1| > x) = x^{−ζ} L(x), with ζ ∈ (0, 2) and L(x) slowly varying.26 Let s_n = Σ_{i=1}^n X_i, a_n = inf{x : P(|X_1| > x) ≤ 1/n}, and b_n = n E[X_1 1_{|X_1| ≤ a_n}]. As n → ∞, (s_n − b_n)/a_n converges in distribution to a nondegenerate random variable Y, which follows a Lévy distribution with exponent ζ.
The most typical use of Lévy’s theorem is the case of a symmetrical distribution with zero mean and power-law distributed tails, P(|X_1| > x) ∼ (x/x_0)^{−ζ}. Then a_n ∼ x_0 n^{1/ζ}, b_n = 0, and (x_0 N^{1/ζ})^{−1} Σ_{i=1}^N X_i converges in distribution to Y, where Y follows a Lévy distribution. The sum Σ_{i=1}^N X_i does not scale as N^{1/2}, as it does in the central limit theorem, but it scales as N^{1/ζ}. This is because the size of the largest units X_i scales as N^{1/ζ}. A symmetrical Lévy distribution with exponent ζ ∈ (0, 2] has the density

λ(x, ζ) = (1/π) ∫_0^∞ e^{−k^ζ} cos(kx) dk.

For ζ = 2, a Lévy distribution is Gaussian. For ζ < 2, the distribution has a power-law tail with exponent ζ.
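A quick simulation illustrates the N^{1/ζ} scaling that drives the paper's results; the Pareto tail, sample sizes, number of replications, and the use of the median absolute sum are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
zeta = 1.5                                   # tail exponent in (0, 2)

def median_abs_sum(N, reps=500):
    # Symmetric power-law variables: Pareto(zeta) sizes with random signs.
    x = rng.pareto(zeta, size=(reps, N)) + 1.0
    x *= rng.choice([-1.0, 1.0], size=(reps, N))
    return np.median(np.abs(x.sum(axis=1)))

for N in (100, 10_000):
    s = median_abs_sum(N)
    print(f"N={N:>6}: |sum| ~ {s:9.1f},  N^(1/zeta) = {N**(1/zeta):9.1f}")
# The ratio of the two |sum| values tracks (10000/100)^(1/1.5) ~ 21.5,
# not the central-limit-theorem factor (10000/100)^(1/2) = 10.
```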
APPENDIX B: LONGER DERIVATIONS

PROOF OF PROPOSITION 3: To simplify notations, using homogeneity, I consider the case σ = S_min = 1. As ΔS_i/S_i = S_i^{−α} u_i,

(35)   ΔY/Y = (Σ_{i=1}^N ΔS_i)/(Σ_{i=1}^N S_i) = (Σ_{i=1}^N S_i^{1−α} u_i)/(Σ_{i=1}^N S_i).
When ζ > 1, by the law of large numbers, N^{−1} Y = N^{−1} Σ_{i=1}^N S_i → S̄. To tackle the numerator, I observe that S_i^{1−α} has power-law tails with exponent ζ′ = ζ/(1 − α). I consider two cases. First, if ζ′ < 2, define x_i = S_i^{1−α} u_i, which has power-law tails with exponent ζ′ and prefactor given by, for x → ∞,

P(|S_i^{1−α} u_i| > x) = P(S_i > (x/|u_i|)^{1/(1−α)}) = E[|u_i|^{ζ/(1−α)}] x^{−ζ/(1−α)} = E[|u_i|^{ζ′}] x^{−ζ′}.

Hence in Lévy’s theorem, the a_N factor is a_N ∼ N^{1/ζ′} E[|u_i|^{ζ′}]^{1/ζ′}, and

ΔY = Σ_{i=1}^N S_i^{1−α} u_i ∼ N^{1/ζ′} E[|u_i|^{ζ′}]^{1/ζ′} g_{ζ′},

26 L(x) is said to be slowly varying if ∀t > 0, lim_{x→∞} L(tx)/L(x) = 1. Prototypical examples are L(x) = c and L(x) = c ln x for a nonzero constant c.
where g_{ζ′} is a Lévy distribution with exponent ζ′. Hence, given S̄ = E[S] = S_min ζ/(ζ − 1),

ΔY/Y ∼ (E[|u_i|^{ζ′}]^{1/ζ′} N^{1/ζ′}/(S̄ N)) g_{ζ′} = ((ζ − 1)/ζ) E[|u_i|^{ζ′}]^{1/ζ′} N^{1/ζ′−1} g_{ζ′}.

Second, if ζ′ ≥ 2, S_i σ(S_i) u_i has finite variance equal to E[S^2 σ(S)^2] E[u_i^2]. By the central limit theorem, ΔY ∼ √N E[S^2 σ(S)^2]^{1/2} E[u_i^2]^{1/2} g_2, where g_2 is a standard Gaussian distribution, and

ΔY/Y ∼ ΔY/(N S̄) ∼ (1/√N) (E[S^2 σ(S)^2]^{1/2} E[u_i^2]^{1/2}/S̄) g_2,

as announced. When ζ = 1, a ln N correction appears because of (14), but the reasoning is otherwise the same. Q.E.D.

Hulten’s Theorem With and Without Instantaneous Reallocation of Factors

For clarity, I rederive and extend Hulten’s (1978) result, which says that a Hicks-neutral productivity shock dπ_i to firm i causes an increase in TFP given by equation (15) (see also Jones (2011) for consequences of this result). There are N firms. Firm i produces good i and uses a quantity X_ij of intermediary inputs from firm j. It also uses L_i units of labor and K_i units of capital, and it has productivity π_i, so its production is Q_i = e^{π_i} F^i(X_{i1}, ..., X_{iN}, L_i, K_i). The representative agent consumes C_i of good i and has a utility function U(C_1, ..., C_N). Production of firm i serves as consumption and intermediary inputs, so Q_i = C_i + Σ_k X_{ki}. The optimum in this economy reads

max_{C_i, X_ik, L_i, K_i} U(C_1, ..., C_N) subject to C_i + Σ_k X_{ki} = e^{π_i} F^i(X_{i1}, ..., X_{iN}, L_i, K_i);
Σ_i L_i = L;   Σ_i K_i = K.

The Lagrangian is

W = U(C_1, ..., C_N) + Σ_i p_i (e^{π_i} F^i(X_{i1}, ..., X_{iN}, L_i, K_i) − C_i − Σ_k X_{ki}) + w(L − Σ_i L_i) + r(K − Σ_i K_i).
Assume marginal cost pricing. GDP in this economy is Y = wL + rK = Σ_i p_i C_i. The value added of firm i is wL_i + rK_i, and its sales are p_i Q_i.
If each firm i has a shock dπ_i to productivity, welfare changes as

dW = Σ_i p_i e^{π_i} F^i(X_{i1}, ..., X_{iN}, L_i, K_i) dπ_i = Σ_i (sales of firm i) dπ_i.

As dW can be rewritten dW = Σ_i (∂U/∂C_i) dC_i = Σ_i p_i dC_i = dY, we obtain equation (15). The above analysis shows that Hulten’s theorem holds even if, after the shock, the capital, labor, and material inputs are not reallocated. This is a simple consequence of the envelope theorem. Hence Hulten’s result also holds if there are frictions in the adjustment of labor, capital, or intermediate inputs.
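The envelope logic above is easy to verify numerically. The sketch below uses an entirely hypothetical two-firm log-utility economy (firm 1 supplies an intermediate input to firm 2), solves the planner's problem with scipy, and checks that dW/dπ_1 equals firm 1's sales valued at the marginal utility of good 1; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

gamma, theta, L = 0.5, 0.3, 1.0   # hypothetical preference/technology parameters

def solve(pi1, pi2=0.0):
    # Planner chooses labor L1 for firm 1 and intermediate use X21 of good 1.
    # Q1 = e^{pi1} L1 (labor only); Q2 = e^{pi2} X21^theta (L - L1)^(1 - theta).
    def neg_u(z):
        L1, X21 = z
        C1 = np.exp(pi1) * L1 - X21
        C2 = np.exp(pi2) * X21**theta * (L - L1)**(1 - theta)
        if C1 <= 0 or C2 <= 0 or not (0 < L1 < L):
            return 1e9
        return -(gamma * np.log(C1) + (1 - gamma) * np.log(C2))
    res = minimize(neg_u, x0=[0.6, 0.1], method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    L1, X21 = res.x
    Q1 = np.exp(pi1) * L1
    p1 = gamma / (Q1 - X21)          # marginal utility of good 1 = shadow price
    return -res.fun, p1 * Q1         # welfare, and firm 1's sales in utility units

eps = 1e-4
W0, sales1 = solve(0.0)
W1, _ = solve(eps)
print(f"dW/dpi1 ~ {(W1 - W0) / eps:.4f},  p1*Q1 = {sales1:.4f}")
# The two numbers agree (~0.65 with these parameters), as the envelope
# theorem predicts: dW = (sales of firm 1) * dpi1.
```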
PROOF OF PROPOSITION 4: We have

Γ_t(K, Q) − Γ_t*(K) = Σ_{i=1}^K (S_{it−1}/Y_{t−1}) (ε̂_it − ε_it)
= Σ_{i=1}^K (S_{it−1}/Y_{t−1}) ((g_it − β̂ X_it) − (g_it − β X_it))
= (β − β̂) Σ_{i=1}^K (S_{it−1}/Y_{t−1}) X_it.

We have β̂ − β → 0 almost surely (a.s.) when K or Q → ∞ (by standard results on OLS), and Σ_{i=1}^K (S_{it−1}/Y_{t−1}) X_it → Σ_{i=1}^∞ (S_{it−1}/Y_{t−1}) X_it in the L^2 sense by assumption (ii). Hence Γ_t(K, Q) − Γ_t*(K) → 0 a.s. Q.E.D.

APPENDIX C: DATA APPENDIX

Firm-Level Data

The firm-level data come from the Fundamentals Annual section of the Compustat North America database on WRDS. The data consist of year-by-firm observations from 1950 to 2008 of the following variables: SIC code (SIC), net sales in $MM (DATA 12), and employees in M (DATA 29). I exclude foreign firms based in the United States, restricting the data set to firms whose fic and loc codes are equal to USA. I filter out oil and oil-related companies (SIC codes 2911, 5172, 1311, 4922, 4923, 4924, and 1389) and energy companies (SIC code between 4900 and 4940), as fluctuations of their sales come mostly from worldwide commodity prices rather than real productivity shocks, and
financial firms (SIC code between 6000 and 6999), because their sales do not mesh well with the meaning used in the present paper.27

An important caveat is in order for U.S. firms. With Compustat, the sales of Ford, say, represent the worldwide sales of Ford, not directly the output produced by Ford in the United States. There is no simple solution to this problem, especially if one wants a long time series. An important task for future research is to provide a version of Compustat that corrects for multinationals.

Macroeconomic Data

The real GDP, GDP per capita, and inflation index data series all come from the Bureau of Economic Analysis. The Solow residual is the multifactor productivity of the private business sector reported by the Bureau of Labor Statistics. The term spread and real interest rate data are from the Fama–Bliss Center for Research in Security Prices (CRSP) Monthly Treasury database on WRDS.

The data for the Romer and Romer (2004) monetary policy shocks come from David Romer’s web page. Their original series (RESID) is monthly from 1969 to 1996. Here the yearly Romer–Romer shock is the sum of the 12 monthly shocks in that year. For the years not covered by Romer and Romer, the value of the shock is assigned to be 0, the mean of the original data. This assignment does not bias the regression coefficient under simple conditions, for instance if the data are i.i.d. It does lower the R2 by the fraction of missed variance; fortunately, most large monetary shocks (e.g., of the 1970’s and 1980’s) are in the data set.

The data for the Hamilton (2003) oil shocks primarily come from James Hamilton’s web page. This series is quarterly and runs until 2001. It is defined as the amount by which the current oil price exceeds its maximum value over the past year. This paper’s yearly shock is the sum of the quarterly Hamilton shocks. The spot price for oil reported by the St. Louis Federal Reserve is used to extend the series to the present.

27 The results with financial firms are very similar.
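As an illustration of the shock-series construction just described, here is a minimal pandas sketch (file and column names are hypothetical) that aggregates the monthly Romer–Romer residuals to yearly sums with zeros for uncovered years, and builds Hamilton-style shocks from a quarterly oil price series.

```python
import pandas as pd

# Romer-Romer: monthly residuals -> yearly sums; 0 for years not covered.
rr = pd.read_csv("romer_resid.csv", parse_dates=["date"])   # date, resid
rr_yearly = (rr.assign(year=rr["date"].dt.year)
               .groupby("year")["resid"].sum()
               .reindex(range(1952, 2009), fill_value=0.0))

# Hamilton: quarterly shock = amount by which the price exceeds its maximum
# over the previous four quarters (0 otherwise); yearly shock = quarterly sum.
oil = pd.read_csv("oil_price.csv")                          # year, quarter, price
oil["prev_max"] = oil["price"].shift(1).rolling(4).max()
oil["shock_q"] = (oil["price"] - oil["prev_max"]).clip(lower=0.0)
oil_yearly = oil.groupby("year")["shock_q"].sum()
```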
REFERENCES

ABRAMOVITZ, M. (1956): “Resource and Output Trends in the United States Since 1870,” American Economic Review, 46, 5–23. [735]
ACEMOGLU, D., S. JOHNSON, AND T. MITTON (2009): “Determinants of Vertical Integration: Financial Development and Contracting Costs,” Journal of Finance, 64, 1251–1290. [746]
ACEMOGLU, D., A. OZDAGLAR, AND A. TAHBAZ-SALEHI (2010): “Cascades in Networks and Aggregate Volatility,” Working Paper, MIT. [744]
ANGELETOS, G. M., AND J. LA’O (2010): “Noisy Business Cycles,” in NBER Macroeconomics Annual. Cambridge: MIT Press. [765]
AXTELL, R. (2001): “Zipf Distribution of U.S. Firm Sizes,” Science, 293, 1818–1820. [739]
BAK, P., K. CHEN, J. SCHEINKMAN, AND M. WOODFORD (1993): “Aggregate Fluctuations From Independent Sectoral Shocks: Self-Organized Criticality in a Model of Production and Inventory Dynamics,” Ricerche Economiche, 47, 3–30. [736,737]
BASU, S., J. FERNALD, AND M. KIMBALL (2006): “Are Technology Improvements Contractionary?” American Economic Review, 96, 1418–1448. [756]
BLANK, S., C. BUCH, AND K. NEUGEBAUER (2009): “Shocks at Large Banks and Banking Sector Distress: The Banking Granular Residual,” Discussion Paper 04/2009, Deutsche Bundesbank. [765]
BRESNAHAN, T. (1987): “Competition and Collusion in the American Automobile Market: The 1955 Price War,” Journal of Industrial Economics, 35, 457–482. [759]
CABALLERO, R., E. M. R. A. ENGEL, AND J. C. HALTIWANGER (1997): “Aggregate Employment Dynamics: Building From Microeconomic Evidence,” American Economic Review, 87, 115–137. [737,745]
CANALS, C., X. GABAIX, J. VILARRUBIA, AND D. WEINSTEIN (2007): “Trade Patterns, Trade Balances, and Idiosyncratic Shocks,” Working Paper, Columbia University. [734,765]
CARVALHO, V. (2009): “Aggregate Fluctuations and the Network Structure of Intersectoral Trade,” Working Paper, CREI. [744]
CARVALHO, V., AND X. GABAIX (2010): “The Great Diversification and Its Undoing,” Working Paper 16424, NBER. [749,764,765]
CLAESSENS, S., S. DJANKOV, AND L. H. P. LANG (2000): “The Separation of Ownership and Control in East Asian Corporations,” Journal of Financial Economics, 58, 81–112. [737]
COCHRANE, J. (1994): “Shocks,” Carnegie–Rochester Conference Series on Public Policy, 41, 295–364. [733]
COMIN, D., AND S. MULANI (2006): “Diverging Trends in Macro and Micro Volatility: Facts,” Review of Economics and Statistics, 88, 374–383. [745]
CONLEY, T., AND B. DUPOR (2003): “A Spatial Analysis of Sectoral Complementarity,” Journal of Political Economy, 111, 311–352. [737]
COUNCIL OF ECONOMIC ADVISORS (1972): Economic Report of the President Transmitted to the Congress. Washington, DC: U.S. Government Printing Office. [762]
COUNCIL OF ECONOMIC ADVISORS (2005): Economic Report of the President Transmitted to the Congress. Washington, DC: U.S. Government Printing Office. [759]
DAVIS, S., J. C. HALTIWANGER, AND S. SCHUH (1996): Job Creation and Destruction. Cambridge, MA: MIT Press. [745]
DI GIOVANNI, J., AND A. LEVCHENKO (2009): “International Trade and Aggregate Fluctuations in Granular Economies,” Working Paper, University of Michigan. [734,765]
DUPOR, W. (1999): “Aggregation and Irrelevance in Multi-Sector Models,” Journal of Monetary Economics, 43, 391–409. [737,743]
DURLAUF, S. (1993): “Non Ergodic Economic Growth,” Review of Economic Studies, 60, 349–366. [736]
DURRETT, R. (1996): Probability: Theory and Examples. Belmont, CA: Wadsworth. [765]
FACCIO, M., AND L. H. P. LANG (2002): “The Ultimate Ownership of Western European Corporations,” Journal of Financial Economics, 65, 365–395. [737]
FOERSTER, A., P.-D. SARTE, AND M. WATSON (2008): “Sectoral vs. Aggregate Shocks: A Structural Factor Analysis of Industrial Production,” Working Paper 08-07, Federal Reserve Bank of Richmond. [758]
GABAIX, X. (1999): “Zipf’s Law for Cities: An Explanation,” Quarterly Journal of Economics, 114, 739–767. [764]
GABAIX, X. (2009): “Power Laws in Economics and Finance,” Annual Review of Economics, 1, 255–293. [740]
GABAIX, X. (2011): “Supplement to ‘The Granular Origins of Aggregate Fluctuations’,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8769_extensions.pdf; http://www.econometricsociety.org/ecta/Supmat/8769_data and programs.zip. [752]
HALL, R. (2009): “By How Much Does GDP Rise If the Government Buys More Output?” Brookings Papers on Economic Activity, 2, 183–231. [746]
HAMILTON, J. D. (2003): “What Is an Oil Shock?” Journal of Econometrics, 113, 363–398. [754,769]
HORVATH, M. (1998): “Cyclicality and Sectoral Linkages: Aggregate Fluctuations From Sectoral Shocks,” Review of Economic Dynamics, 1, 781–808. [737,743]
HORVATH, M. (2000): “Sectoral Shocks and Aggregate Fluctuations,” Journal of Monetary Economics, 45, 69–106. [737,743]
HULTEN, C. (1978): “Growth Accounting With Intermediary Inputs,” Review of Economic Studies, 45, 511–518. [744,767]
JONES, C. I. (2011): “Intermediate Goods and Weak Links: A Theory of Economic Development,” American Economic Journal: Macroeconomics (forthcoming). [767]
JORGENSON, D. W., F. M. GOLLOP, AND B. M. FRAUMENI (1987): Productivity and U.S. Economic Growth. Cambridge, MA: Harvard University Press. [752]
JOVANOVIC, B. (1987): “Micro Shocks and Aggregate Risk,” Quarterly Journal of Economics, 102, 395–409. [736]
KOREN, M., AND S. TENREYRO (2007): “Volatility and Development,” Quarterly Journal of Economics, 122, 243–287. [749]
KYDLAND, F. E., AND E. C. PRESCOTT (1982): “Time to Build and Aggregate Fluctuations,” Econometrica, 50, 1345–1370. [735]
LEE, Y., L. A. N. AMARAL, M. MEYER, D. CANNING, AND H. E. STANLEY (1998): “Universal Features in the Growth Dynamics of Complex Organizations,” Physical Review Letters, 81, 3275–3278. [748,749]
LONG, J., AND C. PLOSSER (1983): “Real Business Cycles,” Journal of Political Economy, 91, 39–69. [737,749]
LUTTMER, E. G. J. (2007): “Selection, Growth, and the Size Distribution of Firms,” Quarterly Journal of Economics, 122, 1103–1144. [764]
MALEVERGNE, Y., P. SANTA-CLARA, AND D. SORNETTE (2009): “Professor Zipf Goes to Wall Street,” Working Paper 15295, NBER. [765]
MANSKI, C. F. (1993): “Identification of Endogenous Social Effects: The Reflection Problem,” Review of Economic Studies, 60, 531–542. [749,751]
MCCONNELL, M., AND G. PEREZ-QUIROS (2000): “Output Fluctuations in the United States: What Has Changed Since the Early 1980’s,” American Economic Review, 90, 1464–1476. [764]
NIREI, M. (2006): “Threshold Behavior and Aggregate Critical Fluctuations,” Journal of Economic Theory, 127, 309–322. [736]
OECD (2004): Economic Survey of Finland, Issue 14. Paris: OECD Publishing. [733]
ROMER, C., AND D. ROMER (2004): “A New Measure of Monetary Shocks: Derivation and Implications,” American Economic Review, 94, 1055–1084. [754,769]
SHEA, J. (2002): “Complementarities and Comovements,” Journal of Money, Credit and Banking, 42, 412–433. [749]
SIMON, H. (1955): “On a Class of Skew Distribution Functions,” Biometrika, 42, 425–440. [764]
SORNETTE, D. (2006): Critical Phenomena in Natural Sciences. New York: Springer. [740]
STANLEY, M. H. R., L. A. N. AMARAL, S. V. BULDYREV, S. HAVLIN, H. LESCHHORN, P. MAASS, M. A. SALINGER, AND H. E. STANLEY (1996): “Scaling Behaviour in the Growth of Companies,” Nature, 379, 804–806. [747]
STOCK, J., AND M. WATSON (2002): “Forecasting Using Principal Components From a Large Number of Predictors,” Journal of the American Statistical Association, 97, 1167–1179. [758]
SUTTON, J. (2002): “The Variance of Firm Growth Rates: The ‘Scaling’ Puzzle,” Physica A, 312, 577–590. [748]
TEMIN, P. (1998): “The Causes of American Business Cycles: An Essay in Economic Historiography,” in Beyond Shocks, ed. by J. C. Fuhrer and S. Schuh. Boston: Federal Reserve Bank. [758,759]
VELDKAMP, L., AND J. WOLFERS (2007): “Aggregate Shocks or Aggregate Information? Costly Information and Business Cycle Comovement,” Journal of Monetary Economics, 54, 37–55. [765]
WYART, M., AND J.-P. BOUCHAUD (2003): “Statistical Models for Company Growth,” Physica A, 326, 241–255. [748]
ZIPF, G. (1949): Human Behavior and the Principle of Least Effort. Cambridge, MA: Addison-Wesley. [740]
Stern School of Finance, New York University, 44 West 4th Street, Suite 9-190, New York, NY 10012, U.S.A. and CEPR and NBER;
[email protected]. Manuscript received August, 2009; final revision received October, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 773–876
REPUTATION IN CONTINUOUS-TIME GAMES BY EDUARDO FAINGOLD AND YULIY SANNIKOV1 We study reputation dynamics in continuous-time games in which a large player (e.g., government) faces a population of small players (e.g., households) and the large player’s actions are imperfectly observable. The major part of our analysis examines the case in which public signals about the large player’s actions are distorted by a Brownian motion and the large player is either a normal type, who plays strategically, or a behavioral type, who is committed to playing a stationary strategy. We obtain a clean characterization of sequential equilibria using ordinary differential equations and identify general conditions for the sequential equilibrium to be unique and Markovian in the small players’ posterior belief. We find that a rich equilibrium dynamics arises when the small players assign positive prior probability to the behavioral type. By contrast, when it is common knowledge that the large player is the normal type, every public equilibrium of the continuous-time game is payoff-equivalent to one in which a static Nash equilibrium is played after every history. Finally, we examine variations of the model with Poisson signals and multiple behavioral types. KEYWORDS: Reputation, repeated games, incomplete information, continuous time.
1. INTRODUCTION REPUTATION PLAYS AN IMPORTANT ROLE in long-run relationships. Firms can benefit from reputation to fight potential entrants (Kreps and Wilson (1982), Milgrom and Roberts (1982)), to provide high quality to consumers (Klein and Leffler (1981)), or to generate good returns to investors (Diamond (1989)). Reputation can help time-inconsistent governments commit to noninflationary monetary policies (Barro (1986), Cukierman and Meltzer (1986)), low capital taxation (Chari and Kehoe (1993), Celentani and Pesendorfer (1996)), and repayment of sovereign debt (Cole, Dow, and English (1995)). In credence goods markets, strategic concerns to avoid a bad reputation can create perverse incentives that lead to market breakdown (Ely and Valimaki (2003)). We study reputation dynamics in repeated games between a large player (e.g., firm, government) and a population of small players (e.g., consumers, households) in which the actions of the large player are imperfectly observable. For example, the observed quality of a firm’s product may be a noisy outcome of the firm’s effort to maintain quality standards; the realized inflation rate may be a noisy signal of the central bank’s target monetary growth. Our setting is a 1 We are grateful to a co-editor and two anonymous referees for exceptionally helpful comments and suggestions. We also thank Daron Acemoglu, Martin Cripps, Kyna Fong, Drew Fudenberg, George J. Mailath, Eric Maskin, Stephen Morris, Bernard Salanie, Paolo Siconolfi, Andrzej Skrzypacz, Lones Smith, and seminar audiences at Bocconi, Columbia, Duke, Georgetown, the Institute for Advanced Studies at Princeton, UCLA, UPenn, UNC at Chapel Hill, Washington University in St. Louis, Yale, the Stanford Institute of Theoretical Economics, the Meetings of the Society for Economic Dynamics in Vancouver, and the 18th International Conference in Game Theory at Stony Brook for many insightful comments.
© 2011 The Econometric Society
DOI: 10.3982/ECTA7377
continuous-time analogue of the Fudenberg and Levine (1992) model. Specifically, we assume that a noisy signal about the large player’s actions is publicly observable and that the evolution of this signal is driven by a Brownian motion. The small players are anonymous and hence behave myopically in every equilibrium, acting to maximize their instantaneous expected payoffs. We follow the incomplete information approach to reputation pioneered by Kreps and Wilson (1982) and Milgrom and Roberts (1982), which assumes that the small players are uncertain as to which type of large player they face. Specifically, the large player can be either a normal type, who is forward looking and behaves strategically, or a behavioral type, who is committed to playing a certain strategy. As usual, we interpret the small players’ posterior probability on the behavioral type as a measure of the large player’s reputation. Recall that in discrete time two main limit results about reputation effects are known. First, in a general model with multiple behavioral and nonbehavioral types, Fudenberg and Levine (1992) provided upper and lower bounds on the set of equilibrium payoffs of the normal type which hold in the limit as he gets arbitrarily patient; when the public signals satisfy an identifiability condition and the stage-game payoffs satisfy a nondegeneracy condition, these asymptotic bounds are tight and equal the stage-game Stackelberg payoff. Second, Cripps, Mailath, and Samuelson (2004) showed that in a wide range of repeated games the power of reputation effects is only temporary: in any equilibrium, the large player’s type must be asymptotically revealed in the long run.2 However, apart from these two limit results (and their extensions to various important settings), not much is known about equilibrium behavior in reputation games. In particular, in the important case in which the signal distribution has full support, the explicit construction of even one sequential equilibrium appears to be an elusive, if not intractable, problem. By contrast, we obtain a clean characterization of sequential equilibria for fixed discount rates, using ordinary differential equations, by setting the model in continuous time and restricting attention to a single behavioral type. Using the characterization, we find that a rich equilibrium dynamics arises when the small players assign positive prior probability to the behavioral type, but not otherwise. Indeed, when the small players are certain that they are facing the normal type, we show that the only sequential equilibria of the continuous-time game are those which yield payoffs in the convex hull of the set of static Nash equilibria. By contrast, the possibility of behavioral types gives rise to nontrivial intertemporal incentives. In this case, we identify conditions for the sequential equilibrium to be unique and Markovian in the small players’ posterior belief, 2
For this result, Cripps, Mailath, and Samuelson (2004) assumed that (i) the public monitoring technology has full support and satisfies an identifiability condition, (ii) there is a single nonbehavioral type and finitely many behavioral types, (iii) the action of each behavioral type is not part of a static Nash equilibrium of the complete information game, and (iv) the small players have a unique best reply to the action of each behavioral type.
and examine when a reputation yields a positive value for the large player. Then, under the equilibrium uniqueness conditions, we examine the impact of the large player’s patience on the equilibrium strategies and obtain a reputation result for equilibrium behavior, which strengthens the conclusion of the more standard reputation results concerning equilibrium payoffs. Finally, we extend some of our results to settings with multiple equilibria, Poisson signals and multiple behavioral types.

Our characterization relies on the recursive structure of repeated games with public signals and on stochastic calculus. Both in discrete time and in continuous time, sequential equilibria in public strategies—hereafter, public sequential equilibria—can be described by the stochastic law of motion of two state variables: the large player’s reputation and his continuation value. The law of motion of the large player’s reputation is determined by Bayesian updating, while the evolution of his continuation value is characterized by promise keeping and incentive constraints. If and only if the evolution of the state variables satisfies these restrictions (as well as the condition that the continuation values are bounded), there is a public sequential equilibrium corresponding to these state variables. It follows that the set of equilibrium belief–continuation value pairs can be characterized as the greatest bounded set inside which the state variables can be kept while respecting Bayes’ rule and the promise keeping and incentive constraints (Theorem 2).3 In continuous time, it is possible to take a significant step further and use stochastic calculus to connect the equilibrium law of motion of the state variables with the geometry of the set of equilibrium beliefs and payoffs. This insight—introduced in Sannikov (2007) in the context of continuous-time games without uncertainty over types—allows us to characterize public sequential equilibria using differential equations.4

We first use this insight to identify an interesting class of reputation games that have a unique sequential equilibrium. Our main sufficient condition for uniqueness is stated in terms of a family of auxiliary one-shot games in which the stage-game payoffs of the large player are adjusted by certain “reputational weights.” When these auxiliary one-shot games have a unique Bayesian Nash equilibrium, we show that the reputation game must also have a unique public sequential equilibrium, which is Markovian in the large player’s reputation and characterized by a second-order ordinary differential equation (Theorem 4).5 We also provide sufficient conditions in terms of the primitives of the game, that is, stage-game payoffs, drift and volatility (Proposition 4).

3 In discrete time, this is a routine extension of the recursive methods of Abreu, Pearce, and Stacchetti (1990) to repeated games with uncertainty over types.
4 See also Sannikov (2008) for an application of this methodology to agency problems.
5 Recall that in a Markov perfect equilibrium (cf. Maskin and Tirole (2001)), the equilibrium behavior is fully determined by the payoff-relevant state variable, which, in our reputation game, is the small players’ posterior belief.
We use our characterization of unique Markovian equilibria to derive a number of interesting results about behavior. First, we show that when the large player’s static Bayesian Nash equilibrium payoff increases in reputation, his sequential equilibrium payoff in the continuous-time game also increases in reputation. Second, while the normal type of large player benefits from imitating the behavioral type, in equilibrium this imitation is necessarily imperfect; otherwise, the public signals would be uninformative about the large player’s type and imitation would have no value. Third, we find a square root law of substitution between the discount rate and the volatility of the noise: doubling the discount rate has the same effect on the equilibrium as rescaling the volatility matrix of the public signal by a factor of √2. Finally, we derive the following reputation result for equilibrium behavior: If the small players assign positive probability to a behavioral type committed to the Stackelberg action and the signals satisfy an identifiability condition, then, as the large player gets patient, the equilibrium strategy of the normal type approaches the Stackelberg action at every reputation level (Theorem 5).

By contrast, when the small players are certain that they are facing the normal type, we find that every equilibrium of the continuous-time game must be degenerate, that is, a static Nash equilibrium must be played after every history (Theorem 3). This phenomenon has no counterpart in the discrete-time framework of Fudenberg and Levine (1992), where nontrivial equilibria of the complete information game are known to exist (albeit with payoffs bounded away from efficiency, as shown in Fudenberg and Levine (1994)). In discrete time, the large player’s incentives to play a nonmyopic best reply can be enforced by the threat of a punishment phase, which is triggered when the public signal about his hidden action is sufficiently “bad.” However, such intertemporal incentives may unravel as actions become more frequent, as first demonstrated in a classic paper by Abreu, Milgrom, and Pearce (1991) using a game with Poisson signals. Such incentives also break down under Brownian signals, as in the repeated Cournot duopoly with flexible production of Sannikov and Skrzypacz (2007) and also in the repeated commitment game with long-run and short-run players of Fudenberg and Levine (2007). The basic intuition underlying these results is that, under some signal structures, when players take actions frequently, the information they observe within each time period becomes excessively noisy, and so the statistical tests that trigger the punishment regimes produce false positives too often. However, this phenomenon arises only under some signal structures (Brownian signals and “good news” Poisson signals), as shown in contemporaneous work of Fudenberg and Levine (2007) and Sannikov and Skrzypacz (2010). We further discuss the discrete-time foundations of our equilibrium degeneracy result in Section 5 and in our concluding remarks in Section 10. While a rich structure of intertemporal incentives can arise in equilibrium only when the prior probability on the behavioral type is positive, there is no
discontinuity when the prior converges to zero: Under our sufficient conditions for uniqueness, as the reputation of the large player tends to zero, for any fixed discount rate, the equilibrium behavior in the reputation game converges to the equilibrium behavior under complete information. In effect, the Markovian structure of reputational equilibria is closely related to the equilibrium degeneracy without uncertainty over types. When the stage game has a unique Nash equilibrium, the only equilibrium of the continuous-time game without uncertainty over types is the repetition of the static Nash equilibrium, which is trivially Markovian. In our setting with Brownian signals, continuous time prevents non-Markovian incentives created by rewards and punishments from enhancing the incentives naturally created by reputation dynamics. We go beyond games with a unique Markov perfect equilibrium and extend our characterization to more general environments with multiple sequential equilibria. Here, the object of interest is the correspondence of sequential equilibrium payoffs of the large player as a function of his reputation. In Theorem 6, we show that this correspondence is convex-valued and that its upper boundary is the greatest bounded solution of a differential inclusion (see, e.g., Aubin and Cellina (1984)), with an analogous characterization for the lower boundary. We provide a computed example illustrating the solutions to these differential inclusions. While a major part of our analysis concerns the case of a single behavioral type, most of the work on reputation effects in discrete time examines the general case of multiple behavioral types, as in Fudenberg and Levine (1992) and Cripps, Mailath, and Samuelson (2004). To be consistent with this tradition, we feel compelled to shed some light on continuous-time games with multiple types, and so we extend our recursive characterization of public sequential equilibrium to this case, characterizing the properties that the reputation vector and the large player’s continuation value must satisfy in equilibrium. With multiple types, however, we do not go to the next logical level to characterize equilibrium payoffs via partial differential equations, thus leaving this important extension for future research. Nevertheless, we use our recursive characterization of sequential equilibria under multiple behavioral types to prove an analogue of the Cripps, Mailath, and Samuelson (2004) result that the reputation effect is a short-lived phenomenon, that is, eventually the small players learn when they are facing a normal type. The counterpart of the Fudenberg–Levine bounds on the equilibrium payoffs of the large type as he becomes patient, for continuous-time games with multiple types, is shown in Faingold (2008). Finally, to provide a more complete analysis, we also address continuoustime games with Poisson signals and extend many of our results to those games. However, as we know from Abreu, Milgrom, and Pearce (1991), Fudenberg and Levine (2007, 2009), and Sannikov and Skrzypacz (2010), Brownian and Poisson signals have different informational properties in games with frequent actions or in continuous time. As a result, our equilibrium uniqueness result extends only to games in which Poisson signals are “good news.” Instead, when
the Poisson signals are “bad news,” multiple equilibria are possible even under the most restrictive conditions on payoffs that yield uniqueness in the Brownian and in the Poisson good news case. The rest of the paper is organized as follows. Section 2 presents an example in which a firm cares about its reputation concerning the quality of its product. Section 3 introduces the continuous-time model with Brownian signals and a single behavioral type. Section 4 provides the recursive characterization of public sequential equilibria. Section 5 examines the underlying complete information game. Section 6 presents the ordinary differential equation characterization when the sequential equilibrium is unique, along with the reputation result for equilibrium strategies and the sufficient conditions for uniqueness in terms of the primitives of the game. Section 7 extends the characterization to games with multiple sequential equilibria. Section 8 deals with multiple behavioral types. Section 9 considers games with Poisson signals and proves the equilibrium uniqueness and characterization result for the case in which the signals are good news. Section 10 concludes by drawing analogies between continuous and discrete time. 2. EXAMPLE: PRODUCT CHOICE Consider a firm that provides a service to a continuum of identical consumers. At each time t ∈ [0 ∞), the firm exerts a costly effort, at ∈ [0 1], which affects the quality of the service provided, and each consumer i ∈ [0 1] chooses a level of service to consume, bit ∈ [0 3]. The firm does not observe each consumer individually, but only the aggregate level of service in the population of consumers, denoted b¯ t . Likewise, the consumers do not observe the firm’s effort level; instead, they publicly observe the quality of the service, dXt , which is a noisy signal of the firm’s effort and follows dXt = at dt + dZt where (Zt )t≥0 is a standard Brownian motion. The unit price for the service is exogenously fixed and normalized to 1. The discounted profit of the firm and the overall surplus of consumer i are, respectively, ∞ ∞ re−rt (b¯ t − at ) dt and re−rt (bit (4 − b¯ t ) dXt − bit dt) 0
0
where r > 0 is the discount rate. Thus, the payoff function of the consumers features a negative externality: greater usage b̄_t of the service by other consumers leads each consumer to enjoy the service less. This feature is meant to capture a situation in which the quality of the service is adversely affected by congestion, as in the case of Internet service providers. Note that in every equilibrium of the continuous-time game, the consumers must optimize myopically, that is, they must act to maximize their expected
instantaneous payoff. This is because the firm can only observe the aggregate consumption in the population, so no individual consumer can have an impact on future play. In the unique static Nash equilibrium, the firm exerts zero effort and the consumers choose the lowest level of service to consume. In Section 5, we show that the unique equilibrium of the continuous-time repeated game is the repeated play of this static equilibrium, irrespective of the discount rate r. Thus, it is impossible for the consumers to create intertemporal incentives for the firm to exert effort, even when the firm is patient and despite their behavior being statistically identified—that is, different effort levels induce different drifts for the quality signal Xt. This stands in sharp contrast to the standard setting of repeated games in discrete time, which is known to yield a great multiplicity of nonstatic equilibria when the nonmyopic players are sufficiently patient (Fudenberg and Levine (1994)).

However, if the firm were able to commit to any effort level a∗ ∈ [0, 1], this commitment would influence the consumers' choices, and hence the firm could earn a higher profit. Indeed, each consumer's choice, b^i, would maximize his expected flow payoff, b^i(a∗(4 − b̄) − 1), and in equilibrium all consumers would choose the same level b∗ = max{0, 4 − 1/a∗}. The service provider would then earn a profit of max{0, 4 − 1/a∗} − a∗, and at a∗ = 1, this function achieves its maximal value of 2, the firm's Stackelberg payoff.

Thus, following Fudenberg and Levine (1992), it is interesting to explore the repeated game with reputation effects. Assume that at time zero the consumers believe that with probability p ∈ (0, 1), the firm is a behavioral type, which always chooses effort level a∗ = 1, and with probability 1 − p, the firm is a normal type, which chooses at to maximize its expected discounted profit. What happens in equilibrium? The top panel of Figure 1 displays the unique sequential equilibrium payoff of the normal type as a function of the population's belief p for different discount rates r. In equilibrium, the consumers continually update their posterior belief φt—the probability assigned to the behavioral type—using the observations of the public signal Xt. The equilibrium is Markovian in φt, which uniquely determines the equilibrium actions of the normal type (bottom left panel) and the consumers (bottom right panel). Consistent with the bounds on equilibrium payoffs obtained in Faingold (2008), which extends the reputation bounds of Fudenberg and Levine (1992) to continuous time, the computation shows that as r → 0, the large player's payoff converges to the Stackelberg payoff of 2. We can also see from Figure 1 that the aggregate consumption in the population, b̄, increases toward the commitment level of 3 as the discount rate r decreases toward 0. While the normal type chooses action 0 for all levels of φt when r = 2, as r is closer to 0, his action increases toward the Stackelberg action a∗ = 1. However, the "imitation" of the behavioral type by the normal type is never perfect, even for very low discount rates.
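As a quick numerical check of the commitment calculation above, here is a minimal Python sketch; it uses only the closed forms b∗ = max{0, 4 − 1/a∗} and max{0, 4 − 1/a∗} − a∗ derived in the text, and the grid resolution is an arbitrary choice.

```python
import numpy as np

a_star = np.linspace(0.01, 1.0, 100)          # candidate commitment effort levels
b_star = np.maximum(0.0, 4.0 - 1.0 / a_star)  # consumers' response: max{0, 4 - 1/a*}
profit = b_star - a_star                      # firm's flow profit under commitment

i = profit.argmax()
print(f"best commitment a* = {a_star[i]:.2f}, profit = {profit[i]:.2f}")
# prints: best commitment a* = 1.00, profit = 2.00 (the Stackelberg payoff)
```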
FIGURE 1.—Equilibrium payoffs and actions in the product choice game.
3. THE REPUTATION GAME

A large player faces a continuum of small players in a continuous-time repeated game. At each time t ∈ [0, ∞), the large player chooses an action at ∈ A and each small player i ∈ I := [0, 1] chooses an action b^i_t ∈ B, where the action spaces A and B are compact subsets of a Euclidean space. The small players are anonymous: at each time t, the public information includes the aggregate distribution of the small players' actions, b̄t ∈ Δ(B), but not the action of any individual small player.6 We assume that the actions of the large player are not directly observable by the small players. Instead, there is a noisy public signal (Xt)t≥0, whose evolution depends on the actions of the large player, the aggregate distribution of the small players' actions, and noise. Specifically,

dXt = μ(at, b̄t) dt + σ(b̄t) dZt,

where (Zt)t≥0 is a d-dimensional Brownian motion, and the drift and volatility coefficients are determined by Lipschitz continuous functions μ : A × B → R^d

6 The aggregate distribution over the small players' actions is the probability distribution b̄t ∈ Δ(B) such that b̄t(B′) = ∫_{{i : b^i_t ∈ B′}} di for each Borel measurable subset B′ ⊆ B.
and σ : B → R^{d×d}, which are linearly extended to A × Δ(B) and Δ(B), respectively.7,8 For technical reasons, we assume that there is a constant c > 0 such that |σ(b)y| ≥ c|y| for all y ∈ R^d and b ∈ B. We write (Ft)t≥0 to designate the filtration generated by (Xt)t≥0. The small players have identical preferences.9 The payoff of player i ∈ I depends on his own action, the aggregate distribution of the small players' actions, and the action of the large player:

∫₀^∞ re^{−rt} h(at, b^i_t, b̄t) dt,
where r > 0 is the discount rate and h : A× B ×B → R is a continuous function, which is linearly extended to A × B × Δ(B). As is standard in the literature on imperfect public monitoring, we assume that the small players do not gather any information from their payoff flow beyond the information conveyed in the public signal. An important case is when h(at bit b¯ t ) is the expected payoff flow of player i, whereas his ex post payoff flow is privately observable and depends only on his own action, the aggregate distribution of the small players’ actions, and the flow, dXt , of the public signal. In this case, the ex post payoff flow of player i takes the form u(bit b¯ t ) dt + v(bit b¯ t ) · dXt so that (1)
h(a, b^i, b̄) = u(b^i, b̄) + v(b^i, b̄) · μ(a, b̄),
as in the product choice game of Section 2. While this functional form is natural in many applications, none of our results hinges on it, so for the sake of generality we take the payoff function h : A × B × B → R to be a primitive and do not impose (1).

While the small players' payoff function is common knowledge, there is uncertainty about the type θ of the large player. At time t = 0, the small players believe that with probability p ∈ [0, 1], the large player is a behavioral type

7 Functions μ and σ are extended to distributions over B via μ(a, b̄) = ∫_B μ(a, b) db̄(b) and σ(b̄)σ(b̄)^⊤ = ∫_B σ(b)σ(b)^⊤ db̄(b).
8 The assumption that the volatility of (Xt)t≥0 is independent of the large player's actions corresponds to the full support assumption that is standard in discrete-time repeated games. By Girsanov's theorem (Karatzas and Shreve (1991, p. 191)), the probability measures over the sample paths of two diffusion processes with the same volatility coefficient but different bounded drifts are mutually absolutely continuous, that is, they have the same zero-probability events. Since the volatility of a continuous-time diffusion is effectively observable, we do not allow σ to depend on at.
9 All our results can be extended to a setting where the small players observe the same public signal, but have heterogeneous preferences.
(θ = b), and that with probability 1 − p, he is a normal type (θ = n). The behavioral type plays a fixed action a∗ ∈ A at all times, irrespective of history. The normal type plays strategically to maximize the expected value of his discounted payoff,

∫₀^∞ re^{−rt} g(at, b̄t) dt,
where g : A × B → R is a Lipschitz continuous function, which is linearly extended to A × Δ(B).

A public strategy of the normal type of the large player is a random process (at)t≥0 with values in A and progressively measurable with respect to (Ft)t≥0. Similarly, a public strategy of small player i ∈ I is a progressively measurable process (b^i_t)t≥0 taking values in B. In the repeated game, the small players formulate a belief about the large player's type following their observations of (Xt)t≥0. A belief process is a progressively measurable process (φt)t≥0 taking values in [0, 1], where φt designates the probability that the small players assign at time t to the large player being the behavioral type.

DEFINITION 1: A public sequential equilibrium consists of a public strategy (at)t≥0 of the normal type of the large player, a public strategy (b^i_t)t≥0 for each small player i ∈ I, and a belief process (φt)t≥0 such that at all times t ≥ 0 and after all public histories, the following conditions hold:
(a) The strategy of the normal type of the large player maximizes his expected payoff:

Et[∫₀^∞ re^{−rs} g(as, b̄s) ds | θ = n].
(b) The strategy of each small player i maximizes his expected payoff:

(1 − φt) Et[∫₀^∞ re^{−rs} h(as, b^i_s, b̄s) ds | θ = n] + φt Et[∫₀^∞ re^{−rs} h(a∗, b^i_s, b̄s) ds | θ = b].
(c) Beliefs (φt )t≥0 are determined by Bayes’ rule given the common prior φ0 = p. A strategy profile that satisfies conditions (a) and (b) is called sequentially rational. A belief process (φt )t≥0 that satisfies condition (c) is called consistent. This definition can be simplified in two ways. First, because the small players have identical preferences, any strategy profile obtained from a public sequential equilibrium by a permutation of the small players’ labels remains a
public sequential equilibrium. Given this immaterial indeterminacy, we shall work directly with the aggregate behavior strategy (b̄t)t≥0 rather than with the individual strategies (b^i_t)t≥0. Second, in any public sequential equilibrium, the small players' strategies must be myopically optimal, for their individual behavior is not observed by any other player in the game and it cannot influence the evolution of the public signal. Thus, with slight abuse of notation, we will say that a tuple (at, b̄t, φt)t≥0 is a public sequential equilibrium when, for all t ≥ 0 and after all public histories, conditions (a) and (c) are satisfied as well as the myopic incentive constraint

b ∈ arg max_{b′∈B} (1 − φt) h(at, b′, b̄t) + φt h(a∗, b′, b̄t)  ∀b ∈ supp b̄t.
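To make the myopic incentive constraint concrete, here is a minimal sketch for the product choice game of Section 2, assuming only the payoffs given there; the helper name and the clipping of consumption to [0, 3] are ours. With belief φ and normal-type effort a, a consumer's marginal flow benefit of consumption is ā(4 − b̄) − 1, where ā = φa∗ + (1 − φ)a, so the aggregate consistent with myopic optimization is b̄ = min{3, max{0, 4 − 1/ā}}.

```python
def aggregate_consumption(phi: float, a: float, a_star: float = 1.0) -> float:
    """Myopic equilibrium aggregate consumption in the product choice game."""
    abar = phi * a_star + (1.0 - phi) * a   # expected effort under belief phi
    if abar <= 0.0:
        return 0.0                          # service is worthless: consume nothing
    return min(3.0, max(0.0, 4.0 - 1.0 / abar))  # interior: abar * (4 - bbar) = 1

print(aggregate_consumption(phi=0.5, a=0.0))     # 2.0: reputation alone sustains demand
```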
Finally, we make the following remarks concerning the definition of strategies and the solution concept:

REMARK 1: Since the aggregate distribution over the small players' actions is publicly observable, the above definition of public strategies is somewhat nonstandard in that it requires the behavior to depend only on the sample path of (Xt)t≥0. Notice, however, that in our game, this restricted definition of public strategies incurs no loss of generality. For a given strategy profile, the public histories along which there are observations of b̄t that differ from those on the path of play correspond to deviations by a positive measure of small players. Since the play that follows such joint deviations is irrelevant for equilibrium incentives, our restricted definition of public strategies does not alter the set of public sequential equilibrium outcomes.

REMARK 2: The framework can be extended to accommodate public sequential equilibria in mixed strategies. A mixed public strategy of the large player is a random process (āt)t≥0 progressively measurable with respect to (Ft)t≥0 with values in Δ(A). To accommodate mixed strategies, the payoff functions g(·, b̄) and h(·, b^i, b̄) and the drift μ(·, b̄) are linearly extended to Δ(A). Because there is a continuum of anonymous small players, the assumption that each of them plays a pure strategy is without loss of generality.

REMARK 3: For both pure- and mixed-strategy equilibria, the restriction to public strategies is without loss of generality. For pure strategies, it is redundant to condition a player's current action on his private history, as every private strategy is outcome-equivalent to a public strategy. For mixed strategies, the restriction to public strategies is without loss of generality in repeated games with signals that have a product structure, as in the repeated games that we consider.10 To form a belief about his opponent's private histories, in a game

10 A public monitoring technology has a product structure if each public signal is controlled by exactly one large player and the public signals corresponding to different large players are conditionally independent given the action profile (cf. Fudenberg and Levine (1994, Section 5)). Since our reputation game has only one large player, this condition holds trivially.
with product structure, a player can ignore his own past actions because they do not influence the signal about his opponent's actions. Formally, a mixed private strategy of the large player in our game would be a random process (at)t≥0 with values in A and progressively measurable with respect to a filtration (Gt)t≥0, which is generated by the public signals and the large player's private randomization. For any private strategy of the large player, an equivalent mixed public strategy can then be defined by letting āt be the conditional distribution of at given Ft. Strategies (at)t≥0 and (āt)t≥0 induce the same probability distributions over public signals and hence give the large player the same expected payoff conditional on Ft.

3.1. Relation to the Literature

At this point, the reader may be wondering how exactly our continuous-time formulation relates to the canonical model of Fudenberg and Levine (1992). Besides continuous versus discrete time, a fundamental difference is that we assume the existence of a single behavioral type, while in the Fudenberg and Levine (1992) model, the type space of the large player is significantly more general, including multiple behavioral types and, possibly, arbitrary nonbehavioral types. Another distinction is that in our model, the uninformed players are infinitely-lived small anonymous players, whereas in each period of the canonical model, there is a single uninformed player who lives only in that period. But in this dimension, our formulation nests the canonical model, since when the payoff h(a, b^i, b̄) is independent of b̄, our model is formally equivalent to one in which there is a continuous flow of uninformed players, each of whom lives only for an instant of time. In this case, b^i = b̄ and, therefore, each individual uninformed player can influence the evolution of the public signal—both drift and volatility—just as in the canonical model. Moreover, the goal of our analysis is conceptually different from that of Fudenberg and Levine (1992). While their main result determines upper and lower bounds on the equilibrium payoffs of the normal type that hold in the limit as he gets arbitrarily patient, we provide a characterization of sequential equilibrium—both payoffs and behavior—under a fixed rate of discounting. For this reason, it is not surprising that in some dimensions, our assumptions are more restrictive than the assumptions in Fudenberg and Levine (1992).

Although our focus is on equilibrium characterization for fixed discount rates, it is worth noting that the Fudenberg–Levine limit bounds on equilibrium payoffs have a counterpart in our continuous-time setting, as shown in Faingold (2008). For each a ∈ A, let B(a) designate the set of Nash equilibria
of the partial game between the small players given some action that is observationally equivalent to a, that is,

B(a) := { b̄ ∈ Δ(B) : ∃ ã ∈ A such that μ(ã, b̄) = μ(a, b̄) and b ∈ arg max_{b′∈B} h(ã, b′, b̄) ∀b ∈ supp b̄ }.
The following theorem is similar to Faingold (2008, Theorem 3.1).11

THEOREM 1—Reputation Effect: For every ε ∈ (0, 1) and δ > 0, there exists r̄ > 0 such that in every public sequential equilibrium of the reputation game with prior p ∈ [ε, 1 − ε] and discount rate r ∈ (0, r̄], the expected payoff of the normal type is bounded below by

min_{b̄∈B(a∗)} g(a∗, b̄) − δ

and bounded above by

max_{a∈A} max_{b̄∈B(a)} g(a, b̄) + δ.

The upper bound, ḡ^s := max_{a∈A} max_{b̄∈B(a)} g(a, b̄), is the generalized Stackelberg payoff. It is the greatest payoff that a large player with commitment power can get, taking into account the limited observability of the large player's actions and the incentives that arise in the partial game among the small players. Perhaps more interesting is the lower bound, min_{b̄∈B(a∗)} g(a∗, b̄), which can be quite high in some games. Of particular interest are games in which it approximates

g_s := sup_{a∈A} min_{b̄∈B(a)} g(a, b̄),
the greatest possible lower bound. While the supremum above may not be attained, given ε > 0, we can find some a∗ ∈ A such that min_{b̄∈B(a∗)} g(a∗, b̄) lies within ε of g_s. Thus, if the behavioral type plays such an a∗, there is a lower bound on equilibrium payoffs converging to g_s − ε as r → 0.12

11 Faingold (2008) examined reputation effects in continuous-time games with a general type space for the large player, as in Fudenberg and Levine (1992), and public signals that follow a controlled Lévy process. Although Faingold (2008) does not consider cross-section populations of small players, a straightforward extension of the proof of Faingold (2008, Theorem 3.1) can be used to prove Theorem 1 above.
12 When the support of the prior contains all possible behavioral types, as in Fudenberg and Levine (1992) and Faingold (2008), there is a lower bound on the equilibrium payoffs of the normal type which converges exactly to g_s. In contrast, when there is only a finite set of behavioral types (e.g., as in the current paper and in Cripps, Mailath, and Samuelson (2004)), the lower bound may be strictly less than g_s.
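In the product choice game these bounds are easy to evaluate numerically, because B(a) is a singleton for every effort level (the drift μ(a, b̄) = a identifies the firm's action and the consumers have a unique best reply). A minimal sketch, with an arbitrary grid:

```python
import numpy as np

a_grid = np.linspace(0.01, 1.0, 200)
B_of_a = np.clip(4.0 - 1.0 / a_grid, 0.0, 3.0)  # unique consumer equilibrium for each a
g = B_of_a - a_grid                             # firm's payoff g(a, B(a))

# with B(a) single-valued, the min and max over B(a) coincide, so both bounds equal 2
print(f"generalized Stackelberg payoff: {g.max():.2f}")
```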
It is, therefore, natural to examine when the reputation bounds, g_s and ḡ^s, coincide. A wedge may arise for two reasons: (i) there may be multiple actions that are observationally equivalent to some action a, under some b̄, and (ii) the partial game among the small players may have multiple static Nash equilibria. Recall that also in Fudenberg and Levine (1992), the upper and lower bounds may be different. However, in their setting, the indeterminacy is less severe, for in their model there is a single myopic player in each period and hence (ii) is just the issue of multiplicity of optima in a single-agent programming problem—a less severe problem than the issue of multiplicity of Nash equilibria in a game. Indeed, Fudenberg and Levine (1992) showed that when the action sets are finite, the upper and lower bounds must be equal under the following assumptions:
• The actions of the large player are identified, that is, there do not exist ā, ā′ ∈ Δ(A), with ā ≠ ā′, and b̄ ∈ Δ(B) such that (ā, b̄) and (ā′, b̄) generate the same distribution over public signals.
• The payoff matrix of the small players is nondegenerate, that is, there do not exist b ∈ B and b̄ ∈ Δ(B) such that h(·, b̄) = h(·, b) and b̄ ≠ b.13
However, in our setting these assumptions do not generally imply ḡ^s = g_s, due to the externality across the small players that we allow.

4. THE STRUCTURE OF PUBLIC SEQUENTIAL EQUILIBRIUM

This section develops a recursive characterization of public sequential equilibria which is used through the rest of the paper. Recall that in a public sequential equilibrium, beliefs must be consistent with the public strategies and the strategies must be sequentially rational given beliefs. For the consistency of beliefs, Proposition 1 presents equation (2), which describes how the small players' posterior evolves with the public signal (Xt)t≥0. The sequential rationality of the normal type can be verified by examining the evolution of his continuation value, that is, his expected future discounted payoff given the history of public signals up to time t. First, Proposition 2 presents a necessary and sufficient condition for a random process (Wt)t≥0 to be the process of continuation values of the normal type. Then Proposition 3 characterizes sequential rationality using a condition that is connected to the law of motion of (Wt)t≥0. Propositions 2 and 3 are analogous to Propositions 1 and 2 of Sannikov (2007).

13 To be precise, in Fudenberg and Levine (1992) the upper and lower bounds are slightly different from ḡ^s and g_s. Since they assumed that the support of the prior contains all possible behavioral types, the myopic players never play a weakly dominated action. Accordingly, they defined B0(a) to be the set of all undominated actions b̄ which are a best reply to some action that is observationally equivalent to a under b̄. Then the definitions of the upper and lower bounds are similar to those of ḡ^s and g_s, but with B0(a) replacing B(a). This turns out to be crucial for their proof that the upper and lower bounds coincide under the identifiability and nondegeneracy conditions above.
We begin with Proposition 1, which characterizes the stochastic evolution of the small players’ posterior beliefs.14 PROPOSITION 1—Belief Consistency: Fix the prior p ∈ [0 1] on the behavioral type. A belief process (φt )t≥0 is consistent with a public strategy profile (at b¯ t )t≥0 if and only if φ0 = p and (2)
dφt = γ(at b¯ t φt ) · σ(b¯ t )−1 (dXt − μφt (at b¯ t ) dt)
where for each (a, b̄, φ) ∈ A × Δ(B) × [0, 1],

γ(a, b̄, φ) := φ(1 − φ)σ(b̄)^{-1}(μ(a∗, b̄) − μ(a, b̄)),
μ^φ(a, b̄) := φμ(a∗, b̄) + (1 − φ)μ(a, b̄).
PROOF: The strategy of each type of the large player induces a probability measure over the paths of the public signal (Xt )t≥0 . From Girsanov’s theorem, we can find the ratio ξt between the likelihood that a path (Xs ; s ∈ [0 t]) arises for type b and the likelihood that it arises for type n. This ratio is characterized by (3)
dξt = ξt ρt · dZ^n_t,  ξ0 = 1,

where ρt := σ(b̄t)^{-1}(μ(a∗, b̄t) − μ(at, b̄t)) and Z^n_t := ∫₀^t σ(b̄s)^{-1}(dXs − μ(as, b̄s) ds) is a Brownian motion under the probability measure generated by the strategy of type n. Suppose that (φt)t≥0 is consistent with (at, b̄t)t≥0. Then, by Bayes' rule, the posterior after observing a path (Xs ; s ∈ [0, t]) is

(4)
φt = pξt/(pξt + (1 − p)).
By Itô’s formula, (5)
dφt = [p(1 − p)/(pξt + (1 − p))²] dξt − [2p²(1 − p)/(pξt + (1 − p))³](ξt² ρt · ρt/2) dt
= φt(1 − φt)ρt · dZ^n_t − φt²(1 − φt)ρt · ρt dt
= φt(1 − φt)ρt · σ(b̄t)^{-1}(dXt − μ^{φt}(at, b̄t) dt),

14 Similar versions of the filtering equation (2) have been used in the literature on strategic experimentation in continuous time (cf. Bolton and Harris (1999), Keller and Rady (1999), and Moscarini and Smith (2001)). For a general treatment of filtering in continuous time, see Liptser and Shiryaev (1977).
which is equation (2). Conversely, suppose that (φt)t≥0 solves equation (2) with initial condition φ0 = p. Define ξt using expression (4), that is,

ξt = ((1 − p)/p)(φt/(1 − φt)).
Then applying Itô's formula to the expression above gives equation (3); hence ξt must equal the ratio between the likelihood that a path (Xs ; s ∈ [0, t]) arises for type b and the likelihood that it arises for type n. Thus, φt is determined by Bayes' rule and the belief process is consistent with (at, b̄t)t≥0. Q.E.D.

Note that in the statement of Proposition 1, (at)t≥0 is the strategy that the small players believe that the normal type is following. Thus, when the normal type deviates from his equilibrium strategy, the deviation affects only the drift of (Xt)t≥0, but not the other terms in equation (2). Coefficient γ of equation (2) is the volatility of beliefs: it reflects the speed with which the small players learn about the type of the large player. The definition of γ plays an important role in the characterization of public sequential equilibrium presented in Sections 6 and 7.

The intuition behind equation (2) is as follows. If the small players are convinced about the type of the large player, then φt(1 − φt) = 0, so they never change their beliefs. When φt ∈ (0, 1), then γ(at, b̄t, φt) is larger, and learning is faster, when the noise σ(b̄t) is smaller or the drifts produced by the two types differ more. From the small players' perspective, the noise driving equation (2), σ(b̄t)^{-1}(dXt − μ^{φt}(at, b̄t) dt), is a standard Brownian motion and their belief (φt)t≥0 is a martingale. From equation (5) we see that, conditional on the large player being the normal type, the drift of φt is nonpositive: in the long run, either the small players learn that they are facing the normal type or the normal type plays like the behavioral type.

We turn to the analysis of the second important state variable in the strategic interaction between the large player and the small players: the continuation value of the normal type, that is, his future expected discounted payoff following a public history, under a given strategy profile. More precisely, given a strategy profile S = (as, b̄s)s≥0, the continuation value of the normal type at time t is

(6)  Wt(S) := Et[∫_t^∞ re^{−r(s−t)} g(as, b̄s) ds | θ = n].
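Before turning to continuation values, the filtering equation is straightforward to simulate. Below is a minimal Euler–Maruyama sketch for the product choice game, assuming the normal type mechanically plays at = 0 while the behavioral type plays a∗ = 1 (so it illustrates the belief dynamics in (2), not equilibrium behavior); the step size and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, phi = 0.01, 10.0, 0.5
a, a_star, sigma = 0.0, 1.0, 1.0

for _ in range(int(T / dt)):
    gamma = phi * (1.0 - phi) * (a_star - a) / sigma  # volatility of beliefs in (2)
    dZ = rng.normal(0.0, np.sqrt(dt))                 # Brownian increment under type n
    # under the normal type, sigma^{-1}(dX - mu^phi dt) = dZ - phi (a* - a)/sigma dt
    phi += gamma * (dZ - phi * (a_star - a) * dt / sigma)
    phi = min(max(phi, 0.0), 1.0)

print(f"posterior after T = {T}: phi = {phi:.4f}")  # drifts toward 0: type n is revealed
```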
The following proposition characterizes the law of motion of Wt(S). Throughout the paper, we write L to designate the space of R^d-valued progressively measurable processes (βt)t≥0 with E[∫₀^T |βt|² dt] < ∞ for all 0 < T < ∞.
PROPOSITION 2—Continuation Values: A bounded process (Wt )t≥0 is the process of continuation values of the normal type under a public strategy profile S = (at b¯ t )t≥0 if and only if for some (βt )t≥0 in L, (7)
dWt = r(Wt − g(at b¯ t )) dt + rβt · (dXt − μ(at b¯ t ) dt)
PROOF: First, note that Wt(S) is a bounded process by (6). Let us show that Wt = Wt(S) satisfies (7) for some (βt)t≥0 in L. Denote by Vt(S) the expected discounted payoff of the normal type conditional on the public information at time t, that is,

(8)  Vt(S) := Et[∫₀^∞ re^{−rs} g(as, b̄s) ds | θ = n] = ∫₀^t re^{−rs} g(as, b̄s) ds + e^{−rt} Wt(S).
Thus, (Vt )t≥0 is a martingale when the large player is the normal type. By the martingale representation theorem, there exists (βt )t≥0 in L such that (9)
dVt(S) = re^{−rt} β_t^⊤ σ(b̄t) dZ^n_t,
where dZtn = σ(b¯ t )−1 (dXt − μ(at b¯ t ) dt) is a Brownian motion when the large player is the normal type. Differentiating (8) with respect to t yields (10)
dVt (S) = re−rt g(at b¯ t ) dt − re−rt Wt (S) dt + e−rt dWt (S)
Combining equations (9) and (10) yields (7), which is the desired result.

Conversely, let us show that if (Wt)t≥0 is a bounded process satisfying (7), then Wt = Wt(S). Indeed, when the large player is the normal type, the process

Vt = ∫₀^t re^{−rs} g(as, b̄s) ds + e^{−rt} Wt
is a martingale under the strategy profile S = (at, b̄t), because dVt = re^{−rt} β_t^⊤ σ(b̄t) dZ^n_t by (7). Moreover, as t → ∞, the martingales Vt and Vt(S) converge because both e^{−rt} Wt and e^{−rt} Wt(S) tend to 0. Therefore,

Vt = Et[V∞] = Et[V∞(S)] = Vt(S)  ⇒  Wt = Wt(S)

for all t, as required. Q.E.D.
Thus, equation (7) describes how Wt (S) evolves with the public history. Note that this equation must hold regardless of the large player’s actions before time t. This fact is used in the proof of Proposition 3 below, which deals with incentives.
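A discretized reading of equation (7) may help: over a short period dt, the promised value must grow at rate r(Wt − g) (interest on the promise net of the flow payoff delivered) plus a sensitivity rβt to the signal surprise. A minimal sketch, with placeholder numbers:

```python
def euler_step(W, g_flow, beta, dX, mu, r, dt):
    """One Euler step of dW = r(W - g) dt + r beta (dX - mu dt), scalar signal."""
    return W + r * (W - g_flow) * dt + r * beta * (dX - mu * dt)

# with no signal surprise (dX = mu dt), W drifts up exactly when it exceeds the flow payoff
print(euler_step(W=1.0, g_flow=0.5, beta=0.2, dX=0.0, mu=0.0, r=0.1, dt=0.01))
```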
We finally turn to conditions for sequential rationality. The condition for the small players is straightforward: they maximize their static payoff because a deviation of an individual small player cannot influence the future course of equilibrium play. The characterization of the large player’s incentive constraints is more complicated: the normal type acts optimally if he maximizes the sum of his current flow payoff and the expected change in his continuation value. PROPOSITION 3—Sequential Rationality: A public strategy profile (at b¯ t )t≥0 is sequentially rational with respect to a belief process (φt )t≥0 if and only if there exists (βt )t≥0 in L and a bounded process (Wt )t≥0 satisfying (7), such that for all t ≥ 0 and after all public histories, (11)
at ∈ arg max_{a′∈A} g(a′, b̄t) + βt · μ(a′, b̄t),

(12)  b ∈ arg max_{b′∈B} φt h(a∗, b′, b̄t) + (1 − φt) h(at, b′, b̄t)  ∀b ∈ supp b̄t.
PROOF: Consider a strategy profile (as, b̄s)s≥0 and an alternative strategy (ãs)s≥0 for the normal type. Denote by Wt the continuation payoff of the normal type at time t when he follows strategy (as)s≥0 after time t, while the population follows (b̄s)s≥0. If the normal type plays (ãs)s≥0 up to time t and then switches to (as)s≥0, his expected payoff conditional on the public information at time t is given by

Ṽt = ∫₀^t re^{−rs} g(ãs, b̄s) ds + e^{−rt} Wt.
By Proposition 2, the continuation values (Wt)t≥0 follow equation (7) for some (βt)t≥0 in L. Thus, the above expression for Ṽt implies

dṼt = re^{−rt}(g(ãt, b̄t) − Wt) dt + e^{−rt} dWt
= re^{−rt}[(g(ãt, b̄t) − g(at, b̄t)) dt + βt · (dXt − μ(at, b̄t) dt)],

and hence the profile (ãt, b̄t)t≥0 yields the normal type an expected payoff of

W̃0 = E[Ṽ∞] = E[Ṽ0 + ∫₀^∞ dṼt] = W0 + E[∫₀^∞ dṼt]
= W0 + E[∫₀^∞ re^{−rt}(g(ãt, b̄t) − g(at, b̄t) + βt · (μ(ãt, b̄t) − μ(at, b̄t))) dt],
where the expectation is taken under the probability measure induced by (ãt, b̄t)t≥0, so that (Xt)t≥0 has drift μ(ãt, b̄t). Suppose that (at, b̄t, φt)t≥0 satisfies the incentive constraints (11) and (12). Then for every (ãt)t≥0, we have W0 ≥ W̃0 and, therefore, the normal type must be sequentially rational at time t = 0. A similar argument can be used to show that the normal type is sequentially rational at all times t > 0 after all public histories. Finally, the small players must also be sequentially rational, since they are anonymous and hence myopic. Conversely, suppose that the incentive constraint (11) fails. Choose a strategy (ãt)t≥0 satisfying (11) for all t ≥ 0 and all public histories. Then W̃0 > W0, and hence the large player is not sequentially rational at t = 0. Likewise, if (12) fails, then a positive measure of small players would not be maximizing their instantaneous expected payoffs. Since the small players are anonymous, and hence myopic, their strategies would not be sequentially rational. Q.E.D.

We can summarize our characterization in the following theorem.

THEOREM 2—Sequential Equilibrium: Fix a prior p ∈ [0, 1] on the behavioral type. A strategy profile (at, b̄t)t≥0 and a belief process (φt)t≥0 form a public sequential equilibrium with continuation values (Wt)t≥0 for the normal type if and only if for some process (βt)t≥0 in L, the following conditions hold:
(a) (φt)t≥0 satisfies (2) with initial condition φ0 = p.
(b) (Wt)t≥0 is a bounded process satisfying (7), given (βt)t≥0.
(c) (at, b̄t)t≥0 satisfies the incentive constraints (11) and (12), given (βt)t≥0 and (φt)t≥0.

Thus, Theorem 2 provides a recursive characterization of public sequential equilibrium. Let E : [0, 1] ⇒ R denote the correspondence that maps a prior probability on the behavioral type into the corresponding set of public sequential equilibrium payoffs of the normal type. An equivalent statement of Theorem 2 is that E is the greatest bounded correspondence such that a controlled process (φt, Wt)t≥0, defined by (2) and (7), can be kept in Graph(E) by controls (at, b̄t, βt)t≥0 which are required to satisfy (11) and (12).15 In Section 5, we apply Theorem 2 to the repeated game with prior p = 0, the complete information game. In Sections 6 and 7, we characterize E(p) for p ∈ (0, 1).

5. EQUILIBRIUM DEGENERACY UNDER COMPLETE INFORMATION

In this section we examine the structure of public equilibrium in the underlying complete information game, that is, the continuous-time repeated game

15 This means that the graph of any bounded correspondence with this property must be contained in the graph of E.
in which it is common knowledge that the large player is the normal type. We show that nontrivial intertemporal incentives cannot be sustained in equilibrium, regardless of the level of patience of the large player.

THEOREM 3—Equilibrium Degeneracy: Suppose the small players are certain that they are facing the normal type (i.e., p = 0). Then, irrespective of the discount rate r > 0, in every public equilibrium of the continuous-time game, the large player cannot achieve a payoff outside the convex hull of his static Nash equilibrium payoffs, that is,

E(0) = co{ g(a, b̄) : a ∈ arg max_{a′∈A} g(a′, b̄) and b ∈ arg max_{b′∈B} h(a, b′, b̄) ∀b ∈ supp b̄ }.
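In the product choice game of Section 2, the static game has a unique Nash equilibrium (zero effort, zero consumption), so Theorem 3 collapses E(0) to {0}. A minimal grid-search sketch of that static equilibrium (the grids are arbitrary choices):

```python
import numpy as np

a_grid = np.linspace(0.0, 1.0, 101)
b_grid = np.linspace(0.0, 3.0, 301)

a_best = a_grid[np.argmax(-a_grid)]                        # effort is pure cost: a = 0
b_best = b_grid[np.argmax(b_grid * (a_best * 4.0 - 1.0))]  # benefit < cost at a = 0: b = 0
print(a_best, b_best, b_best - a_best)                     # static Nash payoff g(0, 0) = 0
```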
Here is the idea behind this result. To give incentives to the large player to play an action that results in a greater payoff than in static Nash equilibrium, his continuation value must respond to the public signal (Xt)t≥0. But when the continuation value reaches the greatest payoff among all public equilibria of the repeated game, such incentives cannot be provided. In effect, if the large player's continuation value were sensitive to the public signal when his continuation value equals the greatest public equilibrium payoff, then with positive probability, the continuation value would escape above this upper bound, and this is not possible. Therefore, at the upper bound, the continuation value cannot be sensitive to the public signal, hence the large player must be playing a myopic best reply there. This implies that at the upper bound, the flow payoff of the large player is no greater than the greatest static Nash equilibrium payoff. Moreover, another necessary condition to prevent the continuation value from escaping is that the drift of the continuation value process must be less than or equal to zero at the upper bound. But since this drift is proportional to the difference between the continuation value and the flow payoff, it follows that the greatest public equilibrium payoff must be no greater than the flow payoff at the upper bound, which, as we have argued above, must be no greater than the greatest static equilibrium payoff.

PROOF OF THEOREM 3: Let v̄ be the greatest Nash equilibrium payoff of the large player in the static complete information game. We will show that it is impossible to achieve a payoff greater than v̄ in any public equilibrium of the continuous-time game. (The proof for the least Nash equilibrium payoff is similar and therefore is omitted.) Suppose there was a public equilibrium (at, b̄t)t≥0 with continuation values (Wt)t≥0 for the normal type in which the large player's expected discounted payoff, W0, was greater than v̄. By Propositions 2 and 3, for some (βt)t≥0 in L, the large player's continuation value must satisfy

dWt = r(Wt − g(at, b̄t)) dt + rβt · (dXt − μ(at, b̄t) dt),
where at maximizes g(a, b̄t) + βt · μ(a, b̄t) over all a ∈ A and b̄t maximizes h(at, b, b̄t) over all b ∈ Δ(B). Let D̄ := W0 − v̄ > 0.

CLAIM 1: There exists c > 0 such that, so long as Wt ≥ v̄ + D̄/2, either the drift of Wt is greater than rD̄/4 or the norm of the volatility of Wt is greater than c.

This claim is an implication of the following lemma, whose proof is relegated to Appendix A.

LEMMA 1: For any ε > 0, there exists δ > 0 such that for all t ≥ 0 and after all public histories, |βt| ≥ δ whenever g(at, b̄t) ≥ v̄ + ε.

Indeed, letting ε = D̄/4 in this lemma yields a δ > 0 such that the norm of the volatility of Wt, which equals r|βt|, must be greater than or equal to c = rδ whenever g(at, b̄t) ≥ v̄ + D̄/4. Moreover, when g(at, b̄t) < v̄ + D̄/4 and Wt ≥ v̄ + D̄/2, then the drift of Wt, which equals r(Wt − g(at, b̄t)), must be greater than rD̄/4, and this concludes the proof of the claim.

Since we have assumed that D̄ = W0 − v̄ > 0, the claim above readily implies that Wt must grow arbitrarily large with positive probability, and this is a contradiction since (Wt)t≥0 is bounded. Q.E.D.

While Theorem 3 has no counterpart in discrete time, it is not a result of continuous-time technicalities.16 The large player's incentives to depart from a static best response become fragile when he is flexible to react to new information quickly. The foundations of this result are similar to the collapse of intertemporal incentives in discrete-time games with frequent actions, as in Abreu, Milgrom, and Pearce (1991) in a prisoners' dilemma with Poisson signals, and in Sannikov and Skrzypacz (2007) and Fudenberg and Levine (2007) in games with Brownian signals. Borrowing intuition from these papers, suppose that the large player must hold his action fixed for an interval of time of length Δ > 0. Suppose that the large player's equilibrium incentives to take the Stackelberg action are created through a statistical test that triggers an equilibrium punishment when the signal is sufficiently "bad." A profitable deviation has a gain on the order of Δ; therefore, such deviation can be prevented only if it increases the probability of triggering punishment by at least O(Δ). Sannikov and Skrzypacz (2007) and Fudenberg and Levine (2007) showed that, under Brownian signals, the log-likelihood ratio for a test against any particular deviation is normally distributed and that a deviation shifts the mean of this distribution by O(√Δ). Thus, a successful test against a deviation would

16 Fudenberg and Levine (1994) showed that in discrete-time repeated games with large and small players, often there are public perfect equilibria with payoffs above static Nash, albeit bounded away from efficiency.
FIGURE 2.—A statistical test to prevent a given deviation.
generate a false positive with probability of O(√Δ). This probability, which reflects the value destroyed in each period by the punishment, is disproportionately large for small Δ compared to the value created during a period of length Δ. This intuition implies that in equilibrium, the large player cannot sustain a payoff above static Nash as Δ → 0. Figure 2 illustrates the densities of the log-likelihood ratio under the "recommended" action of the large player and a deviation, and the areas responsible for the large player's incentives and for the false positives.

The above result relies crucially on the assumption that the volatility of the public signal is independent of the large player's actions. If the large player could influence the volatility, his actions would become publicly observable, since in continuous time, the volatility of a diffusion process is effectively observable. Motivated by this observation, Fudenberg and Levine (2007) considered discrete-time approximations of continuous-time games with Brownian signals, allowing the large player to control the volatility of the public signal. In a variation of the product choice game, when the volatility is decreasing in the large player's effort level, they showed the existence of nontrivial equilibrium payoffs in the limit as the period length tends to zero.17 Finally, Fudenberg and Levine (2009) examined discrete-time repeated games with public signals drawn from multinomial distributions that depend on the length of the period. They assumed that for each profile of stationary strategies, the distribution over signals—properly normalized and embedded in continuous time—satisfies standard conditions for weak convergence to a Brownian motion with drift as the period length tends to zero.18 When the Brownian motion is approximated by a binomial process, they showed that the set of equilibrium payoffs of the discrete-time game approaches the degenerate equilibrium of the continuous-time game. By contrast, when the Brownian

17 They also found the surprising result that when the volatility of the Brownian signal is increasing in the large player's effort, the set of equilibrium payoffs of the large player collapses to the set of static Nash equilibrium payoffs, despite the fact that in the continuous-time limit, the actions of the large player are effectively observable.
18 Namely, they satisfy the conditions of Donsker's invariance principle (Billingsley (1999, Theorem 8.2)).
motion is approximated by a trinomial process and the discount rate is low enough, they show that the greatest equilibrium payoff of the discrete-time model converges to a payoff strictly above the static Nash equilibrium payoff. 6. REPUTATION GAMES WITH A UNIQUE SEQUENTIAL EQUILIBRIUM In many interesting applications, including the product choice game of Section 2, the public sequential equilibrium is unique and Markovian in the small players’ posterior belief. That is, at each time t, the small players’ posterior belief, φt , uniquely determines the equilibrium actions at = a(φt ) and b¯ t = b(φt ), as well as the continuation value of the normal type, Wt = U(φt ), as depicted in Figure 3. This section presents general conditions for the sequential equilibrium to be unique and Markovian, and characterizes the equilibrium using an ordinary differential equation. First, we derive our characterization heuristically. By Theorem 2, in a sequential equilibrium (at b¯ t φt )t≥0 , the small players’ posterior beliefs, (φt )t≥0 , and the continuation values of the normal type, (Wt )t≥0 , evolve according to (13)
dφt = −|γ(at b¯ t φt )|2 /(1 − φt ) dt + γ(at b¯ t φt ) · dZtn
and (14)
dWt = r(Wt − g(at b¯ t )) dt + rβ t σ(b¯ t ) dZtn
for some random process (βt)t≥0 in L, where dZ^n_t = σ(b̄t)^{-1}(dXt − μ(at, b̄t) dt) is a Brownian motion under the normal type.19
FIGURE 3.—The large player's payoff in a Markov perfect equilibrium.

19 Equation (13) is just equation (2) rewritten from the point of view of the normal type, rather than the point of view of the small players. This explains the change of drift.
The negative drift in (13) reflects the fact that, conditional on the large player being the normal type, the posterior on the behavioral type must be a supermartingale. If we assume that the equilibrium is Markovian, then by Itô's formula, the continuation value Wt = U(φt) of the normal type must follow

(15)  dU(φt) = ½|γ(at, b̄t, φt)|²(U″(φt) − 2U′(φt)/(1 − φt)) dt + U′(φt)γ(at, b̄t, φt) · dZ^n_t,
assuming that the value function U : (0 1) → R is twice continuously differentiable. Thus, matching the drifts in (14) and (15) yields the optimality equation (16)
U″(φ) = 2U′(φ)/(1 − φ) + 2r(U(φ) − g(a(φ), b(φ)))/|γ(a(φ), b(φ), φ)|²,  φ ∈ (0, 1).
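The drift matching can be verified symbolically; a minimal sympy sketch, where all symbols stand in for the equilibrium objects at a fixed φ and U1, U2 denote U′(φ), U″(φ):

```python
import sympy as sp

r, phi, g, U, U1, U2, gam = sp.symbols("r phi g U U1 U2 gamma", positive=True)

drift_W = r * (U - g)                                             # drift of W_t in (14)
drift_U = sp.Rational(1, 2) * gam**2 * (U2 - 2 * U1 / (1 - phi))  # drift of U(phi_t) in (15)

U2_solution = sp.solve(sp.Eq(drift_W, drift_U), U2)[0]
print(sp.simplify(U2_solution - (2 * U1 / (1 - phi) + 2 * r * (U - g) / gam**2)))  # 0
```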
Then, to determine the Markovian strategy profile (at, b̄t) = (a(φt), b(φt)), we can match the volatility coefficients in (14) and (15):

rβ_t^⊤ = U′(φt)γ(at, b̄t, φt)^⊤ σ(b̄t)^{-1}.

Plugging this expression into the incentive constraint (11) and applying Theorem 2 yields

(17)
(at, b̄t) ∈ N(φt, φt(1 − φt)U′(φt)/r),
where N : [0, 1] × R ⇒ A × Δ(B) is the correspondence defined by20

(18)  N(φ, z) := { (a, b̄) : a ∈ arg max_{a′∈A} g(a′, b̄) + z(μ(a∗, b̄) − μ(a, b̄))^⊤(σ(b̄)σ(b̄)^⊤)^{-1} μ(a′, b̄) and b ∈ arg max_{b′∈B} φh(a∗, b′, b̄) + (1 − φ)h(a, b′, b̄) ∀b ∈ supp b̄ }.

Effectively, for each (φ, z) ∈ [0, 1] × R, correspondence N returns the set of Bayesian Nash equilibria of an auxiliary one-shot game in which the large player is a behavioral type with probability φ and the payoff of the normal type is perturbed by a "reputational term" weighted by z. In particular, N(φ, 0) is the set of static Bayesian Nash equilibria when the prior on the behavioral type is φ. Our characterization of unique sequential equilibria, stated below as Theorem 4, is valid under Conditions 1 and 2 below. Condition 1 guarantees that

20 Note that N(φ, z) also depends on the strategy a∗ of the behavioral type, although our notation does not make this dependence explicit.
the volatility γ of beliefs, which appears in the optimality equation in the denominator of a fraction, is bounded away from zero across all reputation levels. Condition 2 ensures that correspondence N is nonempty and single-valued, so that equilibrium behavior is determined by condition (17).

CONDITION 1: For each φ ∈ [0, 1] and each Bayesian Nash equilibrium (a, b̄) of the static game with prior φ, we have μ(a, b̄) ≠ μ(a∗, b̄).

Thus, under Condition 1, in every static Bayesian Nash equilibrium, the behavior of the normal type is statistically distinguishable from the behavioral type. Therefore, for the normal type to play either a∗ or some observationally equivalent action, he ought to be given intertemporal incentives. This rules out sequential equilibria in which the posterior beliefs settle in finite time. Note that when the flow payoff of the small players depends on the actions of the large player only through the public signal (cf. equation (1) and the discussion therein), Condition 1 becomes equivalent to the following simpler condition: for every static Nash equilibrium (a, b̄) of the complete information game, μ(a, b̄) ≠ μ(a∗, b̄). This condition is similar to the noncredible commitment assumption of Cripps, Mailath, and Samuelson (2004), which we discuss in greater detail in Section 8.

CONDITION 2: Either of the following conditions holds:
(a) N is a nonempty single-valued correspondence that returns a mass-point distribution of small players' actions for each (φ, z) ∈ [0, 1] × R. Moreover, N is Lipschitz continuous on every bounded subset of [0, 1] × R.
(b) The restriction of N to [0, 1] × [0, ∞) is a nonempty single-valued correspondence that returns a mass-point distribution of small players' actions for each (φ, z) ∈ [0, 1] × [0, ∞). Moreover, N is Lipschitz continuous on every bounded subset of [0, 1] × [0, ∞), and the static Bayesian Nash equilibrium payoff of the normal type, g(N(φ, 0)), is increasing in the prior φ.

Condition 2(b) has practical importance, for it holds in many games in which Condition 2(a) fails. Indeed, the payoff of the normal type in (18), when adjusted by a negative reputational weight z, may fail to be concave even when g(·, b̄) and μ(·, b̄) are strictly concave for all b̄. In Section 6.2, we provide conditions on the primitives of the game—stage-game payoffs, drift, and volatility—which are sufficient for Condition 2(b). Note that Condition 2 depends on the action a∗ of the behavioral type: it may hold for some behavioral types but not others. However, to keep the notation concise, we have chosen not to index correspondence N by a∗.

Finally, Condition 1 is not necessary for equilibrium uniqueness. When Condition 2 holds but Condition 1 fails, the reputation game still has a unique public sequential equilibrium, which is Markovian and characterized by (13),
(16), and (17) up until the stopping time when the posterior first hits a value φ for which there is a static Bayesian equilibrium (aφ b¯ φ ) with μ(aφ b¯ φ ) = μ(a∗ b¯ φ ); from this time on, the posterior no longer updates and the behavior is (aφ b¯ φ ) statically.21 In particular, when the payoff flow of the small players depends on the large player’s action only through the public signal (cf. equation (1) and the discussion therein), then when Condition 1 fails and Condition 2 holds, for every prior p the unique public sequential equilibrium of the reputation game is the repeated play of the static Nash equilibrium of the complete information game, and the posterior is φt = p identically. THEOREM 4: Assume Conditions 1 and 2. Under Condition 2(a) (resp. Condition 2(b)), the correspondence of public sequential equilibrium payoffs of the normal type, E : [0 1] ⇒ R, is single-valued and coincides, on the interval (0 1), with the unique bounded solution (resp. unique bounded increasing solution) of the optimality equation: (19)
U″(φ) = 2U′(φ)/(1 − φ) + 2r(U(φ) − g(N(φ, φ(1 − φ)U′(φ)/r)))/|γ(N(φ, φ(1 − φ)U′(φ)/r), φ)|².

Moreover, at p = 0 and 1, E(p) satisfies the boundary conditions

(20)  lim_{φ→p} U(φ) = E(p) = g(N(p, 0))  and  lim_{φ→p} φ(1 − φ)U′(φ) = 0.
Finally, for each prior p ∈ [0, 1], there exists a unique public sequential equilibrium, which is Markovian in the small players' posterior belief: at each time t and after each public history, the equilibrium actions are determined by (17), the posterior evolves according to (13) with initial condition φ0 = p, and the continuation value of the normal type is Wt = U(φt).

The intuition behind Theorem 4 is similar to the idea behind the equilibrium degeneracy result under complete information (Theorem 3). With Brownian signals, it is impossible to create incentives to sustain a greater payoff than in a Markov perfect equilibrium, for otherwise, in a public sequential equilibrium that achieves the greatest difference W0 − U(φ0) > 0 across all priors φ0 ∈ [0, 1] at time t = 0, the joint volatility of (φt, Wt)t≥0 must be parallel to the slope of U(φt), since Wt − U(φt) cannot increase for any realization of dXt. It follows that rβ_0^⊤ σ(b̄0) = U′(φ0)γ(a0, b̄0, φ0)^⊤ and hence, when N is single-valued, the equilibrium action profile played at time zero must equal that played in a Markov perfect equilibrium at reputation φ0.

21 Indeed, an argument similar to the proof of Theorem 3 can be used to show that when the prior is φ, the large player cannot achieve any payoff other than g(aφ, b̄φ).
The optimality equation (19) then implies that Wt − U(φt) has a positive drift at time zero, which implies that with positive probability, Wt − U(φt) > W0 − U(φ0) for some t > 0, and this is a contradiction.

PROOF OF THEOREM 4: The proofs of existence and uniqueness of a bounded solution of the optimality equation (19) are presented in Appendix C, along with a number of intermediate lemmas. First, Proposition C.3 shows that under Conditions 1 and 2(a), the optimality equation has a unique bounded solution U : (0, 1) → R. Then Proposition C.2 shows that U must satisfy the boundary conditions (39), which include (20). Finally, Proposition C.4 implies that under Conditions 1 and 2(b), the optimality equation has a unique increasing bounded solution U, which also satisfies the boundary conditions.

We now show that for each prior p ∈ (0, 1), there is no public sequential equilibrium in which the normal type receives a payoff different from U(p). Toward a contradiction, suppose that for some p ∈ (0, 1) there is a public sequential equilibrium (at, b̄t, φt)t≥0 in which the normal type receives payoff W0 > U(p). By Theorem 2, the small players' belief process (φt)t≥0 follows (13), the continuation value of the normal type (Wt)t≥0 follows (14) for some (βt)t≥0 in L, and the equilibrium actions (at, b̄t)t≥0 satisfy the incentive constraints (11) and (12). Moreover, by Itô's formula, the process (U(φt))t≥0 follows (15). Then, using (14) and (15), the process Dt := Wt − U(φt), which starts at D0 > 0, has drift

rDt + rU(φt) − rg(at, b̄t) + |γ(at, b̄t, φt)|²(U′(φt)/(1 − φt) − U″(φt)/2),

where rDt + rU(φt) = rWt, and volatility

rβ_t^⊤ σ(b̄t) − γ(at, b̄t, φt)^⊤ U′(φt).

CLAIM 2: There exists δ > 0 such that, so long as Dt ≥ D0/2, either the drift of Dt is greater than rD0/4 or the norm of the volatility of Dt is greater than δ.

This claim is an implication of Lemma C.8 from Appendix C, which shows that for each ε > 0, we can find δ > 0, such that for all t ≥ 0 and after all public histories, either the drift of Dt is greater than rDt − ε or the norm of the volatility of Dt is greater than δ. Thus, letting ε = rD0/4 in Lemma C.8 proves the claim. The proof of Lemma C.8 is relegated to Appendix C, but here we explain the main idea behind it. When the norm of the volatility of Dt is exactly zero, we have rβ_t^⊤ σ(b̄t) = γ(at, b̄t, φt)^⊤ U′(φt), so by (11) and (12) we have

at ∈ arg max_{a′∈A} rg(a′, b̄t) + U′(φt)γ(at, b̄t, φt)^⊤ σ(b̄t)^{-1} μ(a′, b̄t),
b ∈ arg max_{b′∈B} φt h(a∗, b′, b̄t) + (1 − φt)h(at, b′, b̄t)  ∀b ∈ supp b̄t,
and hence (at, b̄t) = N(φt, φt(1 − φt)U′(φt)/r). Then by (19), the drift of Dt must equal rDt. The proof of Lemma C.8 uses a continuity argument to show that for the drift of Dt to be below rDt − ε, the volatility of Dt must be uniformly bounded away from 0.

Since we have assumed that D0 > 0, the claim above readily implies that Dt must grow arbitrarily large with positive probability, which is a contradiction since both Wt and U(φt) are bounded processes. This contradiction shows that a public sequential equilibrium that yields the normal type a payoff greater than U(p) cannot exist. Similarly, it can be shown that no equilibrium can yield a payoff less than U(p).

We turn to the construction of a sequential equilibrium that yields a payoff of U(p) to the normal type. Consider the stochastic differential equation (13) with (at, b̄t)t≥0 defined by (17). Since the function φ → γ(N(φ, φ(1 − φ)U′(φ)/r), φ) is Lipschitz continuous, this equation has a unique solution (φt)t≥0 with initial condition φ0 = p. We now show that (at, b̄t, φt)t≥0 is a public sequential equilibrium in which Wt = U(φt) is the process of continuation values of the normal type. By Proposition 1, the belief process (φt)t≥0 is consistent with (at, b̄t)t≥0. Moreover, since Wt = U(φt) is a bounded process with drift r(Wt − g(at, b̄t)) by (15) and (19), Proposition 2 implies that (Wt)t≥0 is the process of continuation values of the normal type under (at, b̄t)t≥0. Thus, the process (βt)t≥0, given by the representation of Wt in Proposition 2, must satisfy rβ_t^⊤ σ(b̄t) = U′(φt)γ(at, b̄t, φt)^⊤. Finally, to see that the strategy profile (at, b̄t)t≥0 is sequentially rational under (φt)t≥0, recall that (at, b̄t) = N(φt, φt(1 − φt)U′(φt)/r), hence

(21)
at = arg max_{a′∈A} rg(a′, b̄t) + U′(φt)γ(at, b̄t, φt)^⊤ σ(b̄t)^{-1} μ(a′, b̄t),
b̄t = arg max_{b′∈B} φt h(a∗, b′, b̄t) + (1 − φt)h(at, b′, b̄t),
and therefore sequential rationality follows from Proposition 3. We conclude that (at, b̄t, φt)t≥0 is a public sequential equilibrium that yields a payoff of U(p) to the normal type.

Finally, we show that in any public sequential equilibrium (at, b̄t, φt)t≥0, the equilibrium actions are uniquely determined by the small players' belief by (17). Indeed, if (Wt)t≥0 is the process of continuation values of the normal type, then the pair (φt, Wt) must stay on the graph of U, because there is no public sequential equilibrium with continuation value different from U(φt), as we have shown above. Therefore, the volatility of Dt = Wt − U(φt) must be identically
zero, that is, rβ_t^⊤ σ(b̄t) = U′(φt)γ(at, b̄t, φt)^⊤. Thus Proposition 3 implies condition (21), and therefore (at, b̄t) = N(φt, φt(1 − φt)U′(φt)/r). Q.E.D.
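For computation, the optimality equation (19) with boundary conditions (20) is a standard two-point boundary value problem. Below is a minimal sketch of how one might solve it with SciPy; g_N and gamma_N are hypothetical placeholders for the maps (φ, z) ↦ g(N(φ, z)) and (φ, z) ↦ |γ(N(φ, z), φ)|, which must come from the static analysis of the particular game (they are not the product choice maps), and the interior grid avoids the singular endpoints.

```python
import numpy as np
from scipy.integrate import solve_bvp

r = 0.2

def g_N(phi, z):       # placeholder: flow payoff of the normal type at N(phi, z)
    return phi / (1.0 + np.abs(z))

def gamma_N(phi, z):   # placeholder: belief volatility at N(phi, z), bounded away from 0
    return 0.5 + 0.5 * phi * (1.0 - phi)

def ode(phi, y):       # y[0] = U, y[1] = U'
    z = phi * (1.0 - phi) * y[1] / r
    U2 = 2.0 * y[1] / (1.0 - phi) + 2.0 * r * (y[0] - g_N(phi, z)) / gamma_N(phi, z) ** 2
    return np.vstack([y[1], U2])

def bc(ya, yb):        # boundary values from (20): U(p) -> g(N(p, 0)) at p = 0 and 1
    return np.array([ya[0] - g_N(0.0, 0.0), yb[0] - g_N(1.0, 0.0)])

phi_grid = np.linspace(0.001, 0.999, 200)
sol = solve_bvp(ode, bc, phi_grid, np.zeros((2, phi_grid.size)))
print(sol.status, sol.y[0][[0, -1]])  # 0 on success; U near its boundary values
```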
802
E. FAINGOLD AND Y. SANNIKOV
The proof follows directly from observing that the discount rate r and the volatility matrix σ enter the optimality equation only through the product rσσ . Fourth, the differential equation characterization of Theorem 4 is also useful to study reputational incentives in the limit as the large player gets patient. In Section 6.1 below, we use the optimality equation to prove a new general result about reputation effects at the level of equilibrium behavior (as opposed to equilibrium payoffs, as in Fudenberg and Levine (1992) and Faingold (2008)). Under Conditions 1 and 2, if the behavioral type plays the Stackelberg action and the public signals satisfy an identifiability condition, as the large player gets patient, the equilibrium action of the normal type approaches the Stackelberg action at every reputation level (Theorem 5). Finally, to illustrate how our characterization can be useful to study specific classes of games, we discuss a few examples: EXAMPLE —Signal Manipulation: An agent chooses a costly effort a1 ∈ [0 1], which affects two public signals, X1 and X2 . The agent also chooses a level of signal manipulation a2 ∈ [0 1], which affects only signal X1 . Thus, as in Holmstrom and Milgrom’s (1991) multitasking agency model, the agent’s hidden action is multidimensional and the signals are distorted by a Brownian motion. Specifically, dX1t = (a1t + a2t ) dt + dZ1t dX2t = a1t dt + σ dZ2t where σ ≥ 1, that is, the signal that cannot be manipulated is less informative. There is a competitive market of small identical principals, each of whom is willing to hire the agent at a spot wage of b¯ t , which equals the market expectation of the agent’s effort given the public observations of the signals. Thus, the principals do not care directly about the agent’s signal manipulation activity, but in equilibrium, manipulation can affect the principals’ behavior through their statistical inference of the agent’s effort. The agent’s cost of effort and manipulation is quadratic, so that his flow payoff is 1 b¯ t − (a21t + a22t ) 2 Finally, the behavioral type is the Stackelberg type, that is, he is committed to a∗ = (1 0) (full effort and no signal manipulation). The signal manipulation game satisfies Conditions 1 and 2 directly (both (a) and (b)), and hence its sequential equilibrium is unique and Markovian in reputation, and the value function of the normal type is increasing and characterized by the optimality equation. We can examine the agent’s intertemporal
REPUTATION IN CONTINUOUS-TIME GAMES
803
FIGURE 4.—Effort and manipulation in the signal manipulation game.
incentives using the characterization. In equilibrium, when the reputational weight is z ≥ 0, the agent will choose (a1 a2 ) to maximize a1 1 2 1 0 2 − (a1 + a2 ) + z [ 1 − a1 − a2 1 − a1 ] 2 0 1/σ 2 a2 over all (a1 a2 ) ∈ [0 1] × [0 1]. The first-order conditions are −a1 + z(1 − a1 − a2 ) + zσ −2 (1 − a1 ) = 0 −a2 + z(1 − a1 − a2 ) = 0 which yield (1 + 1/σ 2 )z + z 2 /σ 2 1 + (2 + 1/σ 2 )z + z 2 /σ 2 z a2 = 1 + (2 + 1/σ 2 )z + z 2 /σ 2 a1 =
As shown in Figure 4, when the reputational weight is close to zero (as in equilibrium when φ is near 0 or 1), the agent exerts low effort and engages in nearly zero manipulation. As the reputational weight z grows, the agent’s effort increases monotonically and converges to maximal effort a∗1 = 1, while the manipulation action is single-peaked and approaches zero manipulation. In Figure 4, we can also see the effect of the informativeness of the nonmanipulable signal: as σ increases, so that the nonmanipulable signal becomes noisier,
804
E. FAINGOLD AND Y. SANNIKOV
the agent’s equilibrium effort decreases and the amount of signal manipulation increases at every level of the reputational weight z > 0. The intuition is that when the volatility of the nonmanipulable signal increases, it becomes cheaper to maintain a reputation by engaging in signal manipulation relative to exerting true effort. Indeed, a greater σ implies that the small players update their beliefs by less when they observe unexpected changes in the nonmanipulable signal X2 ; on the other hand, irrespective of σ, the actions a1 and a2 are perfect substitutes in terms of their impact on the informativeness of X1 . Finally, we emphasize that characterizations like this are impossible to achieve in discrete time, because equilibria typically fail to be Markovian and incentives cannot be characterized by a single reputational weight parameter. EXAMPLE —Monetary Policy: As in Cukierman and Meltzer (1986), at each time t ≥ 0, a central bank chooses a costly policy variable at ∈ [0 1], which is the target rate of monetary growth. This policy variable is unobservable to the “population” and affects the stochastic evolution of the aggregate price level, Pt , as Pt = exp Xt
where
dXt = at dt + σ dZt
At each time t, the population formulates a rational expectation aet of the current rate of money growth, given their past observations of X. In reduced form, the behavior of the population gives rise to a law of motion of the aggregate level of employment nt (in logarithmic scale), as (22)
dnt = κ(n¯ − nt ) dt + ς(dXt − aet dt)
where n¯ is the long-run level of employment. Thus, the residual change in employment is proportional to the unanticipated inflation, as in a Phillips curve. The central bank cares about stimulating the economy, but inflation is costly to society; specifically, the average discounted payoff of the central bank is ∞ a2t −rt (23) dt αnt − re 2 0 where α > 0 is the marginal rate of substitution between stimulus and inflation. To calculate the expected payoff of the central bank, first note that (22) implies t −κt −κt ¯ −e )+ς eκ(s−t) (dXs − aes ds) nt = n0 e + n(1 0
Plugging into (23) yields the expression for the central bank’s discounted payoff, r r + n¯ 1 − n0 r +κ r+κ t ∞ a2t −rt κ(s−t) e dt + ας re e (dXs − as ds) − 2 0 0
REPUTATION IN CONTINUOUS-TIME GAMES
805
which, by integration-by-parts, equals ∞ r r ας a2t −rt e + n¯ 1 − + (dXt − at dt) − dt re n0 r +κ r +κ r +κ 2 0 Thus, modulo a constant, the expected flow payoff of the central bank is ας a2 (at − aet ) − t r +κ 2 In a static world, the unique Nash equilibrium would have the central bank ας targeting an inflation rate of r+κ > 0. By contrast, if the central bank had the credibility to commit to a rule, it would find it optimal to commit to a zero inflation target. Indeed, under commitment, the population would anticipate the target, and hence the term at − aet would be identically zero and the central bank would be left with minimizing the cost of inflation.24 Consider now the reputation game with a behavioral type committed to a∗ = 0. To study the central bank’s incentives, consider his maximization problem in the definition of correspondence N . When the reputational weight is z, the central bank chooses a ∈ [0 1] to maximize αςa a 2 zaa − − 2 r +κ 2 σ
over all a ∈ [0 1]
and, therefore, sets a=
ας (r + κ)(1 + z/σ 2 )
Thus, when the reputational weight z is near zero (which happens in equilibrium when φ is close to 0 or 1), the central bank succumbs to its myopic incenας tive to inflate prices and sets the target rate around r+κ ; as the reputational weight grows, the target inflation decreases monotonically and approaches zero asymptotically. Moreover, in equilibrium, the central bank sets a higher target inflation rate at each reputational weight when (i) it values stimulus more (i.e., α is greater), (ii) the effect of unanticipated inflation on employment takes longer to disappear (i.e., κ is smaller), (iii) changes in employment are more sensitive to unexpected inflation (i.e., ς is greater), or (iv) the fluctuations in the aggregate price level are more volatile (i.e., σ is greater). EXAMPLE —Product Choice: The repeated product choice game of Section 2 satisfies Conditions 1 and 2 (both (a) and (b)), and hence its sequential equilibrium is unique, is Markovian, and the equilibrium value function of 24 This discrepancy between the Nash and the Stackelberg outcomes is an instance of the familiar time consistency problem in macroeconomic policy studied by Kydland and Prescott (1977).
806
E. FAINGOLD AND Y. SANNIKOV
the normal type, U, is increasing in reputation. For each (φ z) ∈ [0 1] × R, the unique profile (a(φ z) b(φ z)) ∈ N (φ z) is a(φ z) = b(φ z) =
0 z ≤ 1, 1 − 1/z z > 1, 0 φ + (1 − φ)a(φ z) ≤ 1/4, 4 − (φ + (1 − φ)a(φ z))−1 φ + (1 − φ)a(φ z) > 1/4.
The particular functional form we have chosen for the product choice game is not important to satisfy Conditions 1 and 2. More generally, consider the case in which the quality of the product still follows25 dXt = at dt + dZt but the flow profit of the firm and the flow surplus of the consumers take the more general form b¯ t − c(at ) and
v(bit b¯ t ) dXt − bit dt
respectively, where c(at ) is the cost of effort and v(bit b¯ t ) is the consumer’s valuation of the quality of the product. Assume that c and v are twice continuously differentiable and satisfy the standard conditions c > 0, c ≥ 0, v1 > 0, and v11 ≤ 0, where the subscripts denote partial derivatives. In addition, assume that (24)
v12 < −v11
so that either there are negative externalities among the consumers (i.e., v12 < 0 as in the example above), or there are positive, but weak, externalities among them. Then Conditions 1 and 2(b) are satisfied and the conclusion of Theorem 4 still applies. An example in which condition (24) fails is analyzed in Section 7. Finally, note that condition (24) arises naturally in the important case in which there is a continuous flow of short-lived consumers (as in Fudenberg and Levine (1992), Cripps, Mailath, and Samuelson (2004), and Faingold (2008)) instead of a cross-section population of small long-lived consumers (cf. the discussion in Section 3.1). In fact, when there is a single short-lived consumer at each time t, v is independent of b¯ and, therefore, (24) reduces to the condition v11 < 0. 25 When the drift is independent of the aggregate strategy of the small players and is monotonic in the large player’s action, imposing a linear drift is just a normalization.
REPUTATION IN CONTINUOUS-TIME GAMES
807
6.1. Reputation Effects at the Behavioral Level (as r → 0) With the equilibrium uniqueness result in place, we can examine the impact of the large player’s patience on the equilibrium strategies of the large player and the population of small players. In Theorem 5 below, we present a reputation effects result that is similar in spirit to Theorem 1 but concerns equilibrium behavior rather than equilibrium payoffs. Define def
N = {(a b φ) : ∃z ∈ R such that (a b) ∈ N (φ z)} We need the following assumption: CONDITION 3: The following two conditions are satisfied: (a) ∃K > 0 such that |μ(a∗ b) − μ(a b)| ≥ K|a∗ − a| ∀(a b) in the range of N . (b) ∃R > 0 such that |b∗ −b| ≤ R(1 −φ)|a∗ −a| ∀(a b φ) ∈ N ∀b∗ ∈ B(a∗ ). Condition 3(a) is an identifiability condition: it implies that at every action profile in the range of N , the action of the normal type can be statistically distinguished from a∗ by estimating the drift of the public signal. Condition 3(b) is a mild technical condition. To wit, the upper hemicontinuity of N implies that for any convergent sequence (an bn φn )n∈N such that (an bn ) ∈ N (φn zn ) for some zn , if either lim an = a∗ or lim φn = 1, then lim bn = b∗ ∈ B(a∗ ); Condition 3(b) strengthens this continuity property by requiring the modulus of continuity to be proportional to (1 − φn )|a∗ − an |. In Section 6.2 below, we provide sufficient conditions for Condition 3 (as well as for Conditions 1 and 2) in terms of the primitives of the game. Recall that the Stackelberg payoff is g¯ s = max max g(a b) a∈A b∈B(a)
∗
Say that a is a Stackelberg action if it attains the Stackelberg payoff, that is, g¯ s = maxb∈B(a∗ ) g(a∗ b). Finally, under Conditions 1 and 2, we write (ar (φ) br (φ)) to designate the unique Markov perfect equilibrium when the discount rate is r and, likewise, we write Ur (φ) for the equilibrium value of the normal type. THEOREM 5: Assume Conditions 1, 2, and 3, and that a∗ is the Stackelberg action. Let {b∗ } = B(a∗ ). Then, for each φ ∈ (0 1), lim Ur (φ) = g¯ s = g(a∗ b∗ ) and r→0
lim(ar (φ) br (φ)) = (a∗ b∗ ) r→0
The rate of convergence above is not uniform in the reputation level: as φ approaches 0 or 1, the convergence gets slower and slower. The proof of Theorem 5, presented in Appendix C.5, uses the differential equation characterization of Theorem 4 as well as the payoff bounds of Theorem 1.
808
E. FAINGOLD AND Y. SANNIKOV
6.2. Primitive Sufficient Conditions To illustrate the applicability of Theorems 4 and 5 we provide sufficient conditions for Conditions 1, 2, and 3 in terms of the basic data of the game— stage-game payoff functions, drift, and volatility. The conditions are stronger than necessary, but are easy to check and have a transparent interpretation. To ease the exposition, we focus on the case in which the actions and the public signal are one dimensional.26 We use subscripts to denote partial derivatives. PROPOSITION 4: Suppose A, B ⊂ R are compact intervals, a∗ = max A, and the public signal is one dimensional. If g : A × B → R, h : A × B × B → R, and μ : A×B → R are twice continuously differentiable, σ : B → R\{0} is continuous, ¯ ∈ A × B × B, and for each (a b b) ¯ ¯ < 0, (h22 + h23 )(a b ¯ b) ¯ < 0, (g11 (h22 + h23 ) − (a) g11 (a b) < 0, h22 (a b b) ¯ ¯ g12 h12 )(a b b) > 0, ¯ 2 (a∗ b ¯ b) ¯ − h2 (a b ¯ b)) ¯ ≥ 0, (b) g2 (a b)(h ¯ ¯ (c) μ1 (a b) > 0, μ11 (a b) ≤ 0, then Conditions 2(b) and 3 are satisfied. If, in addition, (d) g1 (a∗ b∗ ) < 0, where {b∗ } = B(a∗ ), then Condition 1 is also satisfied. Evidently, any conditions implying equilibrium uniqueness in the reputation game ought to be strong conditions, as they must, at least, imply equilibrium uniqueness in the stage game. Interestingly, Proposition 4 shows that in our continuous-time framework, such conditions are not much stronger than standard conditions for equilibrium uniqueness in static games. Indeed, (a) and (b) are standard sufficient conditions for the static-game analogue of Condition 2(b), so the only extra condition that comes from the reputation dynamics is (c)—a monotonicity and concavity assumption on the drift which is natural in many settings. We view this as the main reason why our continuous-time formulation is attractive from the applied perspective. Many interesting applications, such as the product choice game of Section 2 and the signal manipulation and monetary policy games presented after Theorem 4, satisfy the conditions of Proposition 4. For Condition 2(b), first we ensure that the best-reply correspondences are single-valued by assuming that the payoffs are strictly concave in their own action, that is, g11 < 0 and h22 < 0, and that (h22 + h23 )(a b b) < 0 for all (a b). The latter condition guarantees that the payoff complementarity between the actions of the small players is not too strong, ruling out coordination 26 In models with multidimensional actions/signals, the appropriate versions of conditions (a)– (d) below are significantly more cumbersome. This is reminiscent of the analysis of equilibrium uniqueness in one-shot games and has nothing to do with reputation dynamics.
REPUTATION IN CONTINUOUS-TIME GAMES
809
effects which can give rise to multiple equilibria. Next, the assumption that g11 (h22 + h23 ) − g12 h12 > 0 (along the diagonal of the small players’ actions) implies that the graphs of the best-reply functions intersect only once. This is also a familiar condition: together with g11 < 0 and h22 < 0, it implies the negative definiteness of the matrix of second-order derivatives of the payoff functions of the complete information one-shot game. In addition, condition (c) guarantees that the large player’s best reply remains unique when the rep ¯ ¯ − μ(a b))μ(a ¯ b), with z ≥ 0, is added to his utational term zσ(b)−2 (μ(a∗ b) static payoff. The monotonicity of the stage-game Bayesian Nash equilibrium payoff of the normal type follows from condition (b), which requires that whenever the small players can hurt the normal type by decreasing their action (so that g2 > 0), their marginal utility cannot be smaller when they are facing the behavioral type than when they are facing the normal type (otherwise they would have an incentive to decrease their action when their beliefs increase, causing a decrease in the payoff of the normal type). Condition 3(a) follows from μ1 > 0 and Condition 3(b) follows from h22 < 0 and h22 + h23 ≤ 0 (along the diagonal of the small players’ actions). Finally, for Condition 1, observe that under the assumption that μ1 > 0, there is no a = a∗ with μ(a b∗ ) = μ(a∗ b∗ ). Thus Condition 1 follows directly from assumption (d), which states that a∗ is not part of a static Nash equilibrium of the complete information game. The proof of Proposition 4 is presented in Appendix C.6. Under a set of assumptions similar to those of Proposition 4, Liu and Skrzypacz (2010) proved uniqueness of sequential equilibrium in discrete-time reputation games in which the short-run players observe only the outcomes of a fixed, finite number of past periods. They also found that in the complete information version of their repeated game, the only equilibrium is the repeated play of the stage-game Nash equilibrium, a result similar to our Theorem 3. REMARK 4: The assumption that a∗ is the greatest action in A is only for convenience. The conclusion of Proposition 4 remains valid under any a∗ ∈ A, provided condition (c) is replaced by ¯ > 0 μ1 (a b)
¯ − μ(a b))μ ¯ ¯ ¯ ∈ A × B (μ(a∗ b) ∀(a b) 11 (a b) ≤ 0
and condition (d) is replaced by ⎧ ∗ ⎨ < 0 a = max A, g1 (a∗ b∗ ) = 0 a∗ ∈ (min A max A), ⎩ > 0 a∗ = min A. REMARK 5: Among conditions (a)–(d), perhaps those that are most difficult to check are ¯ b) ¯ > 0 and (g11 (h22 + h23 ) − g12 h12 )(a b ¯ 2 (a∗ b ¯ b) ¯ − h2 (a b ¯ b)) ¯ ≥ 0 g2 (a b)(h
810
E. FAINGOLD AND Y. SANNIKOV
But assuming the other conditions in (a), a simple sufficient condition for the above is that ¯ ≤ 0 g12 (a b)
¯ b) ¯ ≥ 0 h12 (a b
¯ ≥0 and g2 (a b)
as is the case in the product choice game of Section 2. 7. GENERAL CHARACTERIZATION This section extends the analysis of Section 6 to environments with multiple sequential equilibria. When the correspondence N is not single-valued (so that Condition 2 is violated), the correspondence of sequential equilibrium payoffs, E , may also fail to be single-valued. Theorem 6 below characterizes E in this general case. To prove the general characterization, we maintain Condition 1, but replace Condition 2 by Condition 4: CONDITION 4: N (φ z) = ∅ ∀(φ z) ∈ [0 1] × R. This assumption is automatically satisfied in the mixed-strategy extension of the game (cf. Remark 2). However, to simplify the exposition, we have chosen not to deal with mixed strategies explicitly and impose Condition 4 instead. Consider the optimality equation of Section 6. When N is a multivalued correspondence, there can be multiple bounded functions which solve (25)
U (φ) =
2U (φ) 2r(U(φ) − g(a(φ) b¯ (φ))) + 1−φ |γ(a(φ) b¯ (φ) φ)|2
φ ∈ (0 1)
for different selections φ → (a(φ) b¯ (φ)) ∈ N (φ φ(1 − φ)U (φ)/r). An argument similar to the proof of Theorem 4 can be used to show that for each such solution U and each prior p, there exists a sequential equilibrium that achieves the payoff U(p) for the normal type. Therefore, a natural conjecture is that the correspondence of sequential equilibrium payoffs, E , contains all values between its upper boundary—the greatest solution of (25)— and its lower boundary—the least solution of (25). Accordingly, the pair (a(φ) b¯ (φ)) ∈ N (φ φ(1 − φ)U (φ)/r) should minimize the right-hand side of (25) for the upper boundary and maximize it for the lower boundary. However, the differential equation (26)
U (φ) = H(φ U(φ) U (φ))
φ ∈ (0 1)
REPUTATION IN CONTINUOUS-TIME GAMES
811
where H : [0 1] × R2 → R is given by (27)
H(φ u u ) def
=
2u 1−φ
+ min
¯ 2r(u − g(a b)) ¯ ∈ N (φ φ(1 − φ)u /r) : (a b) ¯ φ)|2 |γ(a b
may fail to have a solution in the classical sense. In general, N is upper hemicontinuous but not necessarily lower hemicontinuous, so the right-hand side of (26) can be discontinuous. Due to this technical difficulty, our analysis relies on a generalized notion of solution, called viscosity solution (cf. Definition 2 below), which applies to discontinuous equations.27 We show that the upper def boundary U(φ) = sup E (φ) is the greatest viscosity solution of the upper optidef mality equation (26) and that the lower boundary L(φ) = inf E (φ) is the least solution of the lower optimality equation, which is defined analogously, by replacing minimum by maximum in the definition of H. While in general viscosity solutions can fail to be differentiable, we show that the upper boundary U is continuously differentiable and has an absolutely continuous derivative. In particular, when N is continuous in a neighborhood of (φ φ(1 − φ)U (φ)) for some φ ∈ (0 1), so that H is also continuous in that neighborhood, any viscosity solution of (26) is a classical solution around φ. Otherwise, we show that U (φ) which exists almost everywhere since U is absolutely continuous, must take values in the interval between H(φ U(φ) U (φ)) and H ∗ (φ U(φ) U (φ)), where H ∗ is the upper semicontinuous (u.s.c.) envelope of H, that is, the least u.s.c. function which is greater than or equal to H everywhere. (Note that H is necessarily lower semicontinuous, i.e., H = H∗ .) DEFINITION 2: A bounded function U : (0 1) → R is a viscosity supersolution of the upper optimality equation if for every φ ∈ (0 1) and every twice continuously differentiable test function V : (0 1) → R, U∗ (φ) = V (φ) and ⇒
U∗ ≥ V
∗
V (φ) ≤ H (φ V (φ) V (φ))
A bounded function U : (0 1) → R is a viscosity subsolution if for every φ ∈ (0 1) and every twice continuously differentiable test function V : (0 1) → R, U ∗ (φ) = V (φ) and U ∗ ≤ V ⇒ 27
V (φ) ≥ H∗ (φ V (φ) V (φ))
For an introduction to viscosity solutions, refer to Crandall, Ishii, and Lions (1992).
812
E. FAINGOLD AND Y. SANNIKOV
A bounded function U is a viscosity solution if it is both a supersolution and a subsolution.28 Appendix D presents the complete analysis, which we summarize here. Propositions D.1 and D.2 show that U, the upper boundary of E , is a bounded viscosity solution of the upper optimality equation. Lemma D.3 then shows that U must be a continuously differentiable function with absolutely continuous derivative (so its second derivative exists almost everywhere (a.e.)), and hence U must solve the differential inclusion (28) a.e. U (φ) ∈ H(φ U(φ) U (φ)) H ∗ (φ U(φ) U (φ)) In particular, when H is continuous in a neighborhood of (φ U(φ) U (φ)), then U satisfies equation (26) in the classical sense in a neighborhood of φ. Finally, Proposition D.3 shows that U must be the greatest bounded solution of (28). We summarize our characterization in the following theorem. THEOREM 6: Assume Conditions 1 and 4, and that E is nonempty-valued. Then E is a compact- and convex-valued continuous correspondence whose upper and lower boundaries are continuously differentiable functions with absolutely continuous derivatives. Moreover, the upper boundary of E is the greatest bounded solution of the differential inclusion (28), and the lower boundary is the least bounded solution of the analogous differential inclusion, where maximum is replaced by minimum in the definition of H. To illustrate the case of multiple sequential equilibria, we provide two examples. EXAMPLE —Product Choice With Positive Externalities: This is a simple variation of the product-choice game of Section 2. Suppose the firm chooses effort level at ∈ [0 1], where a∗ = 1 is the action of the behavioral type, and that each consumer chooses a level of service bit ∈ [0 2]. The public signal about the firm’s effort follows dXt = at dt + dZt The payoff flow of the normal type is (b¯ t − at ) dt and each consumer i ∈ I receives the payoff flow bit b¯ t dXt − bit dt. Thus, the consumers’ payoff function features a positive externality: greater usage b¯ t of the service by other consumers allows each individual consumer to enjoy the service more. 28
This is equivalent to Definition 2.2 in Crandall, Ishii, and Lions (1992).
REPUTATION IN CONTINUOUS-TIME GAMES
813
FIGURE 5.—The upper boundary of E (p).
The unique Nash equilibrium of the static game is (0 0). The correspondence N (φ z) determines the action of the normal type uniquely by (29)
a=
0 if z ≤ 1, 1 − 1/z otherwise.
The consumers’ actions are uniquely b¯ = 0 only when (1 − φ)a + φa∗ < 1/2. When (1−φ)a+φa∗ ≥ 1/2, the partial game among the consumers, who face a coordination problem, has two pure Nash equilibria with b¯ = 0 and b¯ = 2 (and one mixed equilibrium when (1 − φ)a + φa∗ > 1/2). Thus, the correspondence N (φ z) is single-valued only on a subset of its domain. How is this multiplicity reflected in the equilibrium correspondence E (p)? Figure 5 displays the upper boundary of E (p) computed for discount rates r = 01, 02, and 05. The lower boundary for this example is identically zero, because the static game among the consumers has an equilibrium with b¯ = 0. For each discount rate, the upper boundary U is divided into three regions. In the region near φ = 0, where the upper boundary is displayed as a solid line, the correspondence φ → N (φ φ(1 − φ)U (φ)/r) is single-valued and U satisfies the upper optimality equation in the classical sense. In the region near φ = 1, where the upper boundary is displayed as a dashed line, the correspondence N is continuous and takes multiple values (two pure and one mixed). There, U also satisfies the upper optimality equation in the classical sense, with the small players’ action given by b¯ = 2. In the middle region, where the
814
E. FAINGOLD AND Y. SANNIKOV
upper boundary is shown as a dotted line, we have 2U (φ) 2r(U(φ) − 2 + a) + U (φ) ∈ 1−φ |γ(a 2 φ)|2 2U (φ) 2r(U(φ) − 0 + a) + 1−φ |γ(a 0 φ)|2 where a is given by (29) with z = φ(1 − φ)U (φ)/r, and 0 and 2 are the two pure values of b¯ that the correspondence N returns. In that range, the correspondence N (φ φ(1 − φ)U (φ)/r) is discontinuous in its arguments: if we lower U (φ) slightly, the equilibrium among the consumers with b¯ = 2 breaks down. These properties of the upper boundary follow from its characterization as the greatest solution of the upper optimality equation. EXAMPLE —Bad Reputation: This is a continuous-time version of the reputation game of Ely and Valimaki (2003) with noisy public signals (see also Ely, Fudenberg, and Levine (2008)). The large player is a car mechanic, who chooses the probability a1 ∈ [0 1] with which he replaces the engine on cars that need an engine replacement and the probability a2 ∈ [0 1] with which he replaces the engine on cars that need a mere tune-up. Thus, the stage-game action of the mechanic, a = (a1 a2 ), is two dimensional. Each small player (car owner), without the knowledge of whether his car needs an engine replacement or a tune-up, decides on the probability b ∈ [0 1] with which he brings the car to the mechanic. The behavioral type of mechanic—a bad behavioral type—replaces the engines of all cars, irrespective of which repair is more suitable, that is, a∗ = (1 1). Car owners observe noisy information about the number of engines replaced by the mechanic in the past: dXt = b¯ t (a1t + a2t ) dt + dZt The payoffs are given by ¯ = b(a ¯ 1 − a2 ) and g(a b)
¯ = b(a1 − a2 − 1/2) h(a b b)
Thus the normal type of mechanic is a good type in that he prefers to replace the engine only when it is needed. As for the car owners, they prefer to take their car to the mechanic only when they believe the mechanic is sufficiently honest, that is, a1 − a2 ≥ 1/2; otherwise, they prefer not to take the car to the mechanic at all. While this game violates Condition 1, we are able to characterize the set of public sequential equilibrium payoffs of the normal type via a natural extension of Theorem 6 (see Remark 6 below). The correspondence E is illustrated in Figure 6. The lower boundary of this correspondence, L, is identically 0. The
815
REPUTATION IN CONTINUOUS-TIME GAMES
FIGURE 6.—The correspondence E in the bad reputation game (for r = 04).
upper boundary, U, displayed as a solid line, is characterized by three regions: (i) for φ ≥ 1/2, U(φ) = 0, (ii) for φ ∈ [φ∗ 1/2], U(φ) = r log 1−φ , and (iii) for φ ∗ φ ≤ φ , U(φ) solves the upper optimality equation in the classical sense, U (φ) =
2U (φ) 2r(U(φ) − 1) + 1−φ φ2 (1 − φ)2
with boundary conditions U(0) = 1
U(φ∗ ) = r log
1 − φ∗ φ∗
and
U (φ∗ ) = −
r φ (1 − φ∗ ) ∗
¯ = ((1 0) 1).29 In particular, the which is attained by the action profile (a b) 1−φ upper boundary U is always below r log φ for φ < 1/2, as illustrated in Figure 6. For φ ≤ 1/2 the set N (φ φ(1 − φ)U (φ)/r) includes the profile ((1 0) 1) as well as the profiles ((a1 a2 ) 0) with a1 − a2 ≤ 1/(2(1 − φ)), and the former profile must be played on the upper boundary of the equilibrium set in this belief range.30 Moreover, for each prior φ < 1/2, there is a (non-Markovian) se29
This differential equation turns out to have a closed-form solution: √ (1 − φ∗ )φ (1+ 1+8r)/2 1 − φ∗ φ ∈ [0 φ∗ ] U(φ) = 1 − 1 − r log φ∗ φ∗ (1 − φ)
The cutoff√ φ∗ is pinned down by the condition U (φ∗ ) = −r/(φ∗ (1 − φ∗ )), which yields φ∗ = 5− 1+8r 1/(1 + e 4r ). 30 For φ ∈ [φ∗ 1/2], the slope U (φ) = −r/(φ(1 − φ)) is such that, at the profile ((1 0) 1) ∈ N (φ φ(1 − φ)U (φ)/r), the mechanic is indifferent among all actions (a1 0), a1 ∈ [0 1]. More-
816
E. FAINGOLD AND Y. SANNIKOV
quential equilibrium attaining the upper boundary in which ((1 0) 1) is played on the equilibrium path as long as the car owners’ posterior remains below 1/2 and the continuation value of the normal type is strictly positive.31 However, since r log 1−φ → 0 as r → 0 for all φ > 0, for all priors, the greatest equilibφ rium payoff of the normal type converges to 0 as r → 0. This result is analogous to Ely and Valimaki (2003, Theorem 1). REMARK 6: The characterization of Theorem 6 can be extended to games in which Condition 1 fails as follows. Denote ¯ : (a b) ¯ ∈ N (φ 0) and μ(a b) ¯ = μ(a∗ b)} ¯ V (φ) = co{g(a b) The set V (φ) can be attained by equilibria in which the normal type always “looks” like the behavioral type and his reputation stays fixed. For games in which Condition 1 holds, the correspondence V (φ) is empty; for the bad reputation example above, V (φ) = {0} for all φ Then, for any belief φ ∈ (0 1), the upper boundary, U(φ), is the maximum between max V (φ) and the greatest solution of the differential inclusion (28) that remains bounded both to the left and to the right of φ until the end of the interval (0 1) or until it reaches the correspondence V . The lower boundary is characterized analogously. For our bad reputation example, the upper boundary of E solves the differential inclusion on (0 1/2] and reaches the correspondence V at the belief level 1/2. REMARK 7: A caveat of Theorem 6 is that it assumes the existence of a public sequential equilibrium, although existence is not guaranteed in general unless we assume the availability of a public randomization device. Standard existence arguments based on fixed-point methods do not apply to our continuoustime games (even assuming finite action sets and allowing mixed strategies), because continuous time renders the set of partial histories at any time t uncountable.32 This is similar to the existence problem that arises in the context of subgame perfect equilibrium in extensive-form games with continuum action sets, as in Harris, Reny, and Robson (1995). Moreover, while it can be shown that the differential inclusion (28) is guaranteed to have a bounded solution over, the correspondence N is discontinuous at (φ φ(1 − φ)U (φ)/r) for all φ ∈ [φ∗ 1/2] and U satisfies the differential inclusion (28) with strict inequality. 31 In this equilibrium, the posterior on the behavioral type, φt , and the continuation values of the normal type, Wt , follow dφt = φt (1 − φt )(dXt − (φt + 1) dt) and dWt = r(Wt − 1) dt + φt (1 − φt )U (φt )(dXt − dt) up until the first time when (φt Wt ) hits the lower boundary; from then on, the equilibrium play follows the static equilibrium ((0 0) 0) and the posterior no longer updates. When φt is below φ∗ , the optimality equation implies that the pair (φt Wt ) remains at the upper boundary, but for φt ∈ [φ∗ 1/2], the differential inclusion, which is satisfied with strict inequality, implies that Wt eventually falls strictly below the upper boundary. 32 At a technical level, the problem is the lack of a topology on strategy sets under which expected discounted payoffs are continuous and strategy sets are compact.
REPUTATION IN CONTINUOUS-TIME GAMES
817
U : (0 1) → R under Conditions 1 and 4, this does not imply the existence of a sequential equilibrium, because a selection φ → (a(φ) b¯ (φ)) satisfying (25) may not exist when N is not connected-valued. However, if the model is suitably enlarged to allow for public randomization, then the existence of sequential equilibrium is restored. In the Supplementary Material (Faingold and Sannikov (2011)), we explain the formalism of public randomization in continuous time and demonstrate that a sequential equilibrium in publicly randomized strategies is guaranteed to exist under Conditions 1 and 4. It is interesting to note that also in Harris, Reny, and Robson (1995), public randomization is the key to obtain the existence of subgame perfect equilibrium. 8. MULTIPLE BEHAVIORAL TYPES In this section we consider reputation games with multiple behavior types. We extend the recursive characterization of sequential equilibrium of Section 4 and prove an analogue of Cripps, Mailath, and Samuelson’s (2004) result that reputation effects are a temporary phenomenon. These results lay the groundwork for future analysis of these games, and we leave the extension of the characterizations of Sections 6 and 7 for future research. Suppose there are K < ∞ behavioral types, where each type k ∈ {1 K} plays a fixed action a∗k ∈ A at all times, after all histories. Initially the small players believe that the large player is behavioral type k with probability pk ∈ K [0 1], so that p0 = 1 − k=1 pk > 0 is the prior probability on the normal type. All else is exactly as described in Section 3. To derive the recursive characterization of sequential equilibrium, the main difference from Section 4 is that now the small players’ belief is a vector in the K-dimensional simplex, ΔK . Accordingly, for each k = 0 K, we write φkt to designate the small players’ belief that the large player is of type k, and write φt = (φ0t φKt ). PROPOSITION 5—Belief Consistency: Fix a prior p ∈ ΔK . A belief process (φt )t≥0 is consistent with a strategy profile (at b¯ t )t≥0 if and only if φ0 = p and for each k = 0 K, (30)
dφkt = γk (at b¯ t φt ) · σ(b¯ t )−1 (dXt − μφt (at b¯ t ) dt)
¯ φ) ∈ A × Δ(B) × ΔK , where for each (a b ¯ φ) = φ0 σ(b) ¯ −1 (μ(a b) ¯ − μφ (a b)) ¯ γ0 (a b def
¯ φ) def ¯ −1 (μ(a∗ b) ¯ − μφ (a b)) ¯ = φk σ(b) γk (a b k ¯ def ¯ + = φ0 μ(a b) μφ (a b)
K k=1
¯ φk μ(a∗k b)
k = 1 K
818
E. FAINGOLD AND Y. SANNIKOV
PROOF: As in the proof of Proposition 1, the relative likelihood ξkt that a signal path arises (Xs ; s ∈ [0 t]) from the behavior of type k instead of the normal type is characterized by dξkt = ξkt ρkt · dZs0
ξk0 = 1
def def where ρkt = σ(b¯ t )−1 (μ(a∗k b¯ t ) − μ(at b¯ t )) and dZt0 = σ(b¯ t )−1 (dXt − μ(at b¯ t ) dt) is a Brownian motion under the normal type. By Bayes’ rule,
φkt =
pk ξkt K p0 + pk ξkt k=1
Applying Itô’s formula to this expression yields dφkt = p0 +
pk K
dξkt pk ξkt
k=1
−
K k =1
pk ξkt
p0 +
K
2 pk dξk t + (· · ·) dt
pk ξkt
k=1
where we do not need to derive the (· · ·) dt term because we know that (φkt )t≥0 is a martingale from the point of view of the small players. Since dZtφ = σ(b¯ t )−1 (dXt − μφt (at b¯ t ) dt) is a Brownian motion from the viewpoint of the small players, dφkt =
pk K
ξkt ρkt · dZtφ p0 + k=1 pk ξkt K pk ξkt φ − 2 pk ξk t ρk t · dZt K k =1 p0 + pk ξkt
= φkt ρkt −
k=1
K
φk t ρk t · dZtφ
k =1
= φkt σ(b¯ t )−1 (μ(a∗k b¯ t ) − μφt (at b¯ t )) · dZtφ which is the desired result.
Q.E.D.
REPUTATION IN CONTINUOUS-TIME GAMES
819
Proceeding toward the recursive characterization of sequential equilibrium, it can be readily verified that the characterization of the continuation value of the normal type, namely Proposition 2, remains valid under multiple behavior types. Also the characterization of sequential rationality given in Proposition 3 continues to hold, provided the small players’ incentive constraint (12) is replaced by (31)
b¯ t ∈ arg max φ0t h(at b b¯ t ) + b∈B
K
φkt h(a∗k b b¯ t )
∀b ∈ supp b¯ t
k=1
Thus we have the following analogue of Theorem 2: THEOREM 7—Sequential Equilibrium: Fix the prior p ∈ ΔK . A public strategy ¯ t≥0 and a belief process (φt )t≥0 form a sequential equilibrium with profile (at b) continuation values (Wt )t≥0 for the normal type if and only if there exists a random process (βt )t≥0 in L such that the following conditions hold: (a) (φt )t≥0 satisfies equation (30) with initial condition φ0 = p. (b) (Wt )t≥0 is a bounded process satisfying equation (7), given (βt )t≥0 . (c) (at b¯ t )t≥0 satisfy the incentive constraints (11) and (31), given (βt )t≥0 and (φt )t≥0 . It is beyond the scope of this paper to apply Theorem 7 to obtain characterizations similar to Theorems 4 and 6 under multiple behavioral types. However, to illustrate the applicability of Theorem 7, we use it to prove an analogue of Cripps, Mailath, and Samuelson’s (2004) result that in every equilibrium the reputation of the large player disappears in the long run when the large player is the normal type. The following condition extends Condition 1 to the setting with multiple behavioral types: CONDITION 1 : For each φ ∈ ΔK and each static Bayesian Nash equilibrium ¯ of the game with prior φ, we have μ(a b) ¯ ∈ ¯ : k = 1 K}. (a b) / co{μ(a∗k b) Note that when the flow payoff of the small players depends on the actions of the large player only through the public signal (cf. equation (1) and the discussion therein), then Condition 1 becomes equivalent to a simpler condi¯ of the complete information game, tion: for each static Nash equilibrium (a b) ∗ ¯ ¯ μ(a b) ∈ / co{μ(ak b) : k = 1 K}. The following result, whose proof is presented in Appendix E, is similar to Cripps, Mailath, and Samuelson (2004, Theorem 4), albeit under a different assumption. THEOREM 8: Under Condition 1 , in every public sequential equilibrium, limt→∞ φ0t = 1 almost surely under the normal type.
820
E. FAINGOLD AND Y. SANNIKOV
When there is a single behavioral type, Condition 1 cannot be dispensed with. If it fails, then for some prior there is a static Bayesian Nash equilibrium (BNE) in which the behavior of the normal type is indistinguishable from the behavioral type. Thus, the repeated play of this BNE is a sequential equilibrium of the reputation game in which the posterior belief never changes. While the conclusion of Theorem 8 is similar to the conclusion of Cripps, Mailath, and Samuelson (2004, Theorem 4), our assumptions are different. In general, the discrete-time analogue of Condition 1 —namely the requirement that in every static Bayesian Nash equilibrium, the distribution over signals induced by the normal type is not a convex combination of the distributions induced by the behavioral types—is neither stronger nor weaker than the following assumptions of Cripps, Mailath, and Samuelson (2004, Theorem 4): (i) The stage-game actions of the large player are identifiable. (ii) No behavioral type plays an action which is part of a static Nash equilibrium of the complete information game. (iii) The small players’ best reply to each behavioral type is unique. However, when there is a single behavioral type, Condition 1 is implied by conditions (i) and (ii) above. It is, therefore, surprising that our result does not require condition (iii), which is known to be a necessary assumption in the discrete-time setting of Cripps, Mailath, and Samuelson (2004). The reason why we can dispense with condition (iii) in our continuous-time framework is related to our equilibrium degeneracy result under complete information (Theorem 3). In discrete time, when conditions (i) and (ii) hold but condition (iii) fails, it is generally possible to construct sequential equilibria where the normal type plays the action of the behavioral type after every history, in which case the large player’s type is not revealed in the long run. In such equilibrium, the incentives of the normal type arise from the threat of a punishment phase in which the small players play the best reply to the behavioral type that the normal type dislikes the most. However, we know from the analysis of Section 5 that intertemporal incentives of this sort cannot be provided in our continuoustime setting. 9. POISSON SIGNALS This section examines a variation of the model in which the public signals are driven by a Poisson process, instead of a Brownian motion. First, we derive a recursive characterization of a public sequential equilibrium akin to Theorem 2. Then, for a class of games in which the signals are good news, we show that the sequential equilibrium is unique, Markovian, and characterized by a functional differential equation. As discussed in Section 5, our result that the equilibria of the underlying complete information game are degenerate under Brownian signals (Theorem 3) is reminiscent of Abreu, Milgrom, and Pearce’s (1991) analysis of a repeated prisoners’ dilemma with Poisson signals. This seminal paper shows that
REPUTATION IN CONTINUOUS-TIME GAMES
821
when the arrival of a Poisson signal is good news—evidence that the players are behaving non-opportunistically—the equilibria of the repeated game must collapse to the static Nash equilibrium in the limit as the period length tends to zero, irrespective of the discount rate. On the other hand, when Poisson arrivals are bad news—evidence of non-cooperative behavior—Abreu, Milgrom, and Pearce (1991) showed that the greatest symmetric public equilibrium payoff is greater than, and bounded away from, the static Nash equilibrium payoff as the length of the period shrinks, provided the discount rate is low enough and the signal is sufficiently informative. Thus, in complete information games, the structure of equilibrium is qualitatively different under Poisson and Brownian signals.33 A similar distinction exists in the context of reputation games, and we exploit it below to derive a characterization of sequential equilibrium in the good news case.34 Throughout the section we assume that the public signal, now denoted (Nt )t≥0 , is a counting process with Poisson intensity λ(at b¯ t ) > 0, where λ : A × B → R+ is a continuous function. This means that (Nt )t≥0 is increasing and right-continuous, has left limits everywhere, t takes values in the nonnegative integers, and has the property that Nt − 0 λ(as b¯ s ) ds is a martingale. A public strategy profile is now a random process (at b¯ t )t≥0 with values in A × Δ(B) which is predictable with respect to the filtration generated by (Nt )t≥0 .35 Otherwise, the structure of the reputation game is as described in Section 3. The next proposition, which is analogous to Proposition 1, characterizes the evolution of the small players’ posterior beliefs under Poisson signals. PROPOSITION 6—Belief Consistency: Fix a prior probability p ∈ [0 1] on the behavioral type. A belief process (φt )t≥0 is consistent with a public strategy profile (at b¯ t )t≥0 if and only if φ0 = p and (32)
dφt = φt− (at b¯ t )(dNt − λφt− (at b¯ t ) dt)
33 Sannikov and Skrzypacz (2010) examined this difference in detail, showing that long-run players must use the Brownian and Poisson signals in distinct ways to create incentives. Specifically, Brownian signals can be used effectively only through payoff transfers along tangent hyperplanes, as in Fudenberg, Levine, and Maskin (1994), whereas Poisson jumps can also create incentives by “burning value,” that is, moving orthogonally to the tangent hyperplane. 34 In a recent paper, Board and Meyer-ter-Vehn (2010) also compared the structure of equilibria under Brownian, Poisson good news, and Poisson bad news signals in a product-choice game in continuous time. In the class of games they examined, the firm has two strategic types. 35 For the definition of predictability, see Brémaud (1981, p. 8). Any real-valued process (Yt )t≥0 which is predictable with respect to the filtration (Ft )t≥0 must have the property that each Yt is def measurable with respect to the σ-field Ft− = s
822
E. FAINGOLD AND Y. SANNIKOV
¯ φ) ∈ A × ΔB × [0 1], where for each (a b ¯ = φ(1 − φ) φ(a b) def
¯ − λ(a b)) ¯ (λ(a∗ b) ¯ λφ (a b)
¯ = φλ(a∗ b) ¯ + (1 − φ)λ(a b) ¯ λφ (a b) def
The proof is similar to the proof of Proposition 1, but replaces Girsanov’s theorem and Itô’s formula by their appropriate counterparts in the Poisson setting (cf. Brémaud (1981, pp. 165–168, 337–339)). Equation (32) above means that the posterior jumps by φt − φt− = φt− (at b¯ t ) when a Poisson event arrives, and that between two consecutive arrivals, the posterior follows the differential equation dφt /dt = −φt (1 − φt )(λ(a∗ b¯ t ) − λ(at b¯ t )) ¯ plays here a role which is pathwise deterministic. Thus, coefficient φ(a b) ¯ similar to that played by γ(a b φ) in the Brownian setting, measuring the sensitivity of the belief updating to the fluctuations in the public signal. From t the small players’ viewpoint, the process Nt − 0 λφs− (as b¯ s ) ds is a martingale and so is the belief process (φt )t≥0 . The following proposition, which is analogous to Proposition 2, characterizes the evolution of the continuation value of the normal type. PROPOSITION 7—Continuation Values: A bounded process (Wt )t≥0 is the process of continuation values of the normal type under a strategy profile (at b¯ t )t≥0 if and only if there exists a predictable process (ζt )t≥0 such that (33)
dWt = r(Wt− − g(at b¯ t )) dt + rζt (dNt − λ(at b¯ t ) dt)
The proof is similar to the proof of Proposition 2, but replaces the martingale representation theorem with the analogue result for Poisson-driven martingales (cf. Brémaud (1981, p. 68)). Equation (33) means that the continuation value of the normal type jumps by Wt − Wt− = rζt when a Poisson event arrives and has drift given by r(Wt − g(at b¯ t ) − ζt λ(at b¯ t )) between two consecutive Poisson arrivals. Turning to sequential rationality, for each (φ ζ) ∈ [0 1] × R, consider the ¯ ∈ A × Δ(B) satisfying set M(φ ζ) of all action profiles (a b) ¯ + ζλ(a b) ¯ a ∈ arg max g(a b) a ∈A
¯ + (1 − φ)h(a b b) ¯ b ∈ arg max φh(a∗ b b) b ∈B
¯ ∀b ∈ supp b
The next result, which is analogous to Proposition 3 and has a similar proof, characterizes sequential rationality using these local incentive constraints.
REPUTATION IN CONTINUOUS-TIME GAMES
823
PROPOSITION 8—Sequential Rationality: Let (at b¯ t )t≥0 be a public strategy profile, let (φt )t≥0 be a belief process, and let (ζt )t≥0 be the predictable process from Proposition 7. Then (at b¯ t )t≥0 is sequentially rational with respect to (φt )t≥0 if and only if (34)
(at b¯ t ) ∈ M(φt− ζt ) almost everywhere
We can summarize the recursive characterization of sequential equilibria of Poisson reputation games in the following theorem. THEOREM 9—Sequential Equilibrium: Fix the prior probability p ∈ [0 1] on the behavioral type. A public strategy profile (at b¯ t )t≥0 and a belief process process (φt )t≥0 form a sequential equilibrium with continuation payoffs (Wt )t≥0 for the normal type if and only if there exists a predictable process (ζt )t≥0 such that the following conditions hold: (a) (φt )t≥0 solves equation (32) with initial condition φ0 = p. (b) (Wt )t≥0 is bounded and satisfies equation (33), given (ζt )t≥0 . (c) (at b¯ t )t≥0 satisfies the incentive constraint (34), given (ζt )t≥0 and (φt )t≥0 . 9.1. Good News and Unique Sequential Equilibrium In this section we identify a class of Poisson reputation games with a unique sequential equilibrium. We assume that the payoff functions and the signal structure satisfy conditions similar to those of Proposition 4, and, in addition, assume that the Poisson signals are good news. First, we show that under complete information, the unique equilibrium of the continuous-time game is the repeated play of the static equilibrium (Theorem 10), as in Abreu, Milgrom, and Pearce (1991). Second, for reputation games, we show that the sequential equilibrium is unique, Markovian, and characterized by a functional differential equation (Theorem 11). Finally, we discuss briefly how the analysis would change when the Poisson signals are bad news and, in particular, why we expect the uniqueness and characterization results to break down in this case. We impose the following assumptions: A, B ⊂ R are compact intervals, a∗ = max A, the Poisson intensity depends only on the large player’s action (so we write λ(a) for each a ∈ A), the functions g, h, and λ are twice continuously ¯ ∈ A × B × B, differentiable, and for each (a b b) ¯ ¯ ¯ b) ¯ < 0, g12 (a b) ¯ ≤ 0, (a) g11 (a b) < 0, h22 (a b b) < 0, (h22 + h23 )(a b ¯ b) ¯ ≥ 0, h12 (a b ¯ ≥ 0, (b) g2 (a b) (c) λ (a) > 0, λ (a) ≤ 0, (d) g1 (a∗ b∗ ) < 0, where {b∗ } = B(a∗ ). For future reference, we call this set of assumptions the good news model. Note that conditions (a), (b), and (d) above are similar to conditions (a), (b), and (d) of Proposition 4, albeit slightly stronger (cf. Remark 5). Condition (c)
824
E. FAINGOLD AND Y. SANNIKOV
means that the signals are good news for the large player. Indeed, under λ > 0, the arrival of a Poisson signal is perceived by the small players as evidence in favor of the behavioral type; this is beneficial for the normal type, since under conditions (a) and (b), the flow payoff of the normal type is increasing in reputation. This is in contrast to the case in which the signals are bad news, which arises when conditions (a), (b), and (d) hold but condition (c) is replaced by (c ) λ (a) < 0, λ (a) ≥ 0. We discuss the bad news case briefly at the end of the section. We begin the analysis with the complete information game. In the good news model, the equilibrium of the continuous-time game collapses to the static Nash equilibrium, similar to what happens in the Brownian case (Theorem 3) and in the repeated prisoners’ dilemma of Abreu, Milgrom, and Pearce (1991). THEOREM 10: In the good news model, if the small players are certain that they are facing the normal type (i.e., p = 0), then the unique public equilibrium of the continuous-time game is the repeated play of the unique static Nash equilibrium, irrespective of the discount rate. PROOF: As shown in the first part of the proof of Proposition 4, the static game has a unique Nash equilibrium (aN bN ), where bN is a mass-point distribution. Let (at b¯ t )t≥0 be an arbitrary public equilibrium with continuation values (Wt )t≥0 for the normal type. Suppose, toward a contradiction, that W0 > g(aN bN ). (The proof for the reciprocal case, W0 < g(aN bN ), is similar and therefore is omitted.) By Theorem 9, for some predictable process (ζt )t≥0 , the large player’s continuation value must follow dWt = r(Wt − g(at b¯ t ) − ζt λ(at )) dt + rζt dNt drift
jump size
where at maximizes g(a b¯ t ) + ζt λ(a ) over all a ∈ A and where b¯ t maximizes ¯ def = W0 − g(aN bN ) > 0. h(at b b¯ t ) over all b ∈ Δ(B). Let D ¯ either CLAIM 3: There exists c > 0 such that, as long as Wt ≥ g(aN bN ) + D/2, ¯ or, upon the arrival of a Poisson event, the size the drift of Wt is greater than r D/4 of the jump in Wt is greater than c. The claim follows from the following lemma, whose proof given is given in Appendix F. LEMMA 2: For any ε > 0 there exists δ > 0 such that for all t ≥ 0 and after all public histories, g(at b¯ t ) + ζt λ(at ) ≥ g(aN bN ) + ε implies ζt ≥ δ.
REPUTATION IN CONTINUOUS-TIME GAMES
825
¯ in this lemma gives a δ > 0 with the property that Indeed, letting ε = D/4 whenever g(at b¯ t ) + ζt λ(at ) ≥ rg(aN bN ) + ε, the size of the jump in Wt , which def equals rζt , must be greater than or equal to c = rδ. Moreover, when g(at b¯ t ) + N N N ¯ and Wt ≥ g(a bN ) + D/2, ¯ then the drift of Wt , ζt λ(at ) < rg(a b ) + D/4 ¯ and this which equals r(Wt − g(at b¯ t ) − ζt λ(at )), must be greater than r D/4, concludes the proof of the claim. ¯ = W0 − g(aN bN ) > 0, the claim above readily Since we have assumed that D implies that Wt must grow arbitrarily large with positive probability, and this is Q.E.D. a contradiction since (Wt )t≥0 is bounded. We now turn to the incomplete information case (i.e., p ∈ (0 1)). Theorem 11 below shows that in the good news model, the reputation game has a unique sequential equilibrium, which is Markovian and characterized by a functional differential equation. As in the Brownian case, the equilibrium actions are determined by the posterior on the behavioral type and the equilibrium value function of the normal type. Thus, to state our characterization, we first need to examine how a candidate value function for the normal type affects the incentives of the players. In effect, Proposition 9 below—which is the Poisson counterpart of Proposition 4—establishes the existence, uniqueness, and Lipschitz continuity of action profiles that satisfy the incentive constraint (34), where rζt is the jump in the continuation value Wt when the pair (φt Wt ) is constrained to lie in the graph of a candidate value function V : (0 1) → R. The proof of Proposition 9 is presented in Appendix F. We write C inc ([0 1]) to denote the complete metric space of real-valued continuous increasing functions over the interval [0 1], equipped with the supremum distance. PROPOSITION 9: In the good news model, for each (φ V ) ∈ [0 1] × C inc ([0 1]), there is a unique action profile (a(φ V ) b(φ V )) ∈ A × B that satisfies the incentive constraint (35) (a(φ V ) b(φ V )) ∈ M φ V φ + φ(a(φ V )) − V (φ) /r Moreover, (a b) : [0 1] × C inc ([0 1]) → A × B is a continuous function and, for each φ ∈ [0 1], V → (a(φ V ) b(φ V )) is a Lipschitz continuous function on the metric space C inc ([0 1]), with a Lipschitz constant that is uniform in φ. The characterization of the sequential equilibrium, presented in Theorem 11 below, uses the differential equation, called the optimality equation, (36)
U (φ) =
rg(a(φ U) b(φ U)) + λ(a(φ U))U(φ) − rU(φ) λφ (a(φ U))φ(a(φ U))
826
E. FAINGOLD AND Y. SANNIKOV
where
def U(φ) = U φ + φ(a(φ U)) − U(φ) and the function (a(·) b(·)) is defined implicitly by (35). This is a functional retarded differential equation, because of the delayed term U(φ + φ) on the right-hand side and the fact that the size of the lag, φ(a(φ U)) is endogenously determined by the global behavior of the solution U over [φ 1) rather than by the value of U at a single point. The main result of this section is the following theorem. THEOREM 11: In the good news model, the correspondence of sequential equilibrium payoffs of the normal type, E : [0 1] → R, is single-valued and coincides, on the interval (0 1), with the unique bounded increasing solution U : (0 1) → R of the optimality equation (36). Moreover, at p ∈ {0 1}, E (p) satisfies the boundary conditions lim U(φ) = E (p) = g(M(p 0)) and
φ→p
lim φ(1 − φ)U (φ) = 0
φ→p
Finally, for each prior p ∈ [0 1], there is a unique public sequential equilibrium, which is Markovian in the small players’ posterior belief: at each time t and after each public history, the equilibrium action profile is (a(φt− U) b(φt− U)), where (a(·) b(·)) is the continuous function defined implicitly by condition (35); the small players’ posterior (φt )t≥1 follows equation (32) with initial condition φ0 = p; and the continuation value of the normal type is Wt = U(φt ). However, equilibrium uniqueness generally breaks down when signals are bad news. Under assumptions (a), (b), (c ), and (d), multiple non-Markovian sequential equilibria may exist, despite the fact that M(φ ζ) remains a singleton for each (φ ζ) ∈ [0 1] × (−∞ 0]. Indeed, this multiplicity already arises in the underlying complete information game, as in the repeated prisoners’ dilemma of Abreu, Milgrom, and Pearce (1991). Intuitively, the large player can have incentives to play an action different from the static best reply by a threat that if a bad signal arrives, he will be punished by perpetual reversion to the static Nash equilibrium.36 In the reputation game, multiple non-Markovian equilibria may exist for a similar reason. Recall that in the Markov perfect equilibrium of a game in which Poisson signals are good news, the reaction of the large player’s continuation payoff to the arrival of a signal is completely determined by the updated beliefs and the equilibrium value function. Payoffs above those in the Markov perfect equilibrium cannot be sustained by a threat of reversion to the Markov equilibrium, because those punishments would have to be applied after good news, and therefore they cannot create incentives to play 36
Naturally, for such profile to be an equilibrium, the signals must be sufficiently informative.
REPUTATION IN CONTINUOUS-TIME GAMES
827
actions closer to a∗ . By contrast, when signals are bad news, such punishments can create incentives to play actions closer to a∗ effectively. While we do not characterize sequential equilibria for the case in which the signals are bad news, we conjecture that the upper and lower boundaries of the graph of the correspondence of sequential equilibrium payoffs of the large player solve a pair of (coupled) functional differential equations. Incentive provision must satisfy the feasibility condition that after bad news, the belief– continuation value pair transitions to a point between the upper and lower boundaries. Subject to this feasibility condition, payoff maximization implies that when the initial belief–continuation pair is on the upper boundary of the graph of the correspondence of sequential equilibrium payoffs, conditional on the absence of bad news, it must stay on the upper boundary. A similar statement must hold for the lower boundary. This observation gives rise to a coupled pair of differential equations that characterize the upper and lower boundaries. We leave the formalization of this conjecture for future research. 10. CONCLUSION Our result that many continuous-time reputation games have a unique public sequential equilibrium, which is Markovian in the population’s belief, does not have an analogue in discrete time. One may wonder what happens to the equilibria of discrete-time games in the limit as the time between actions Δ shrinks to 0, as in Abreu, Milgrom, and Pearce (1991), Faingold (2008), Fudenberg and Levine (2007, 2009), and Sannikov and Skrzypacz (2007, 2010). While it is beyond the scope of this paper to answer this question rigorously, we will make some guided conjectures and leave the formal analysis to future research. To be specific, consider what happens as the length of the period of fixed actions, Δ, shrinks to zero in a game with Brownian signals that satisfy Conditions 1 and 2. For a fixed value of Δ > 0, the upper and lower boundaries of the set of equilibrium payoffs will generally be different. For a given action profile, it is uniquely determined how the small players’ belief responds to the public signal, but there is some room to choose continuation values within the bounds of the equilibrium payoff set. Consider the task of maximizing the expected payoff of the large player by the choice of a current equilibrium action and feasible continuation values. In many instances the solution to this optimization problem would take the form of a tail test: the large player’s continuation values are chosen on the upper boundary of the equilibrium payoff set unless the public signal falls below a cutoff, in which case continuation values are taken from the lower boundary. This way of providing incentives is reminiscent of the use of the cutoff tests to trigger punishments in Sannikov and Skrzypacz (2007). As in the complete-information game of Section 5, it becomes less and less efficient to use such tests to provide incentives as Δ → 0. Therefore, it is natural to conjecture that as Δ → 0, the distance between the upper and lower boundaries of the equilibrium value set converges to 0.
828
E. FAINGOLD AND Y. SANNIKOV
APPENDIX A: PROOF OF LEMMA 1 ¯ β) ∈ Fix an arbitrary constant M > 0. Consider the set Φ0 of all tuples (a b d A × Δ(B) × R satisfying (37)
¯ + β · μ(a b) ¯ a ∈ arg max g(a b) a ∈A
¯ ∀b ∈ supp b ¯ b ∈ arg max h(a b b) b ∈B
¯ ≥ v¯ + ε g(a b)
and |β| ≤ M. Note that Φ0 is a compact set, as it is a closed subset of the compact space A × Δ(B) × {β ∈ Rd : |β| ≤ M}, where Δ(B) is equipped with the ¯ β) → |β| achieves weak* topology. Therefore, the continuous function (a b its minimum, η, on Φ0 , and we must have η > 0 because of the condition ¯ ≥ v¯ +ε. It follows that |β| ≥ δ def = min{M η} for any (a b β) that satisfy g(a b) conditions (37). ¯ φ) APPENDIX B: BOUNDS ON COEFFICIENT γ(a b ¯ φ) which is used in the This technical appendix proves a bound on γ(a b subsequent analysis. LEMMA B.1: Assume Condition 1. There exists a constant C > 0 such that for ¯ φ z) ∈ A × Δ(B) × (0 1) × R, all (a b ¯ ∈ N (φ z) (a b)
⇒
(1 + |z|)
¯ φ)| |γ(a b ≥ C φ(1 − φ)
PROOF: If the thesis of the lemma were false, there would be a sequence (an b¯ n φn zn )n∈N with φn ∈ (0 1) and (an b¯ n ) ∈ N (φn zn ) for all n ∈ N, such that both zn |γ(an b¯ n φn )|/(φn (1 − φn )) and |γ(an b¯ n φn )|/(φn (1 − φn )) converged to 0. Passing to a subsequence if necessary, we can assume that ¯ φ) ∈ A × Δ(B) × [0 1]. Then, by the de(an b¯ n φn ) converges to some (a b ¯ must be a finition of N and the continuity of g, μ, σ, and h, the profile (a b) Bayesian Nash equilibrium of the static game with prior φ, since zn (μ(a∗ b¯ n ) − μ(an b¯ n )) (σ(b¯ n )σ(b¯ n ) )−1 = zn γ(an b¯ n φn ) σ(b¯ n )−1 /(φn (1 − φn )) → 0. ¯ by Condition 1, and therefore lim infn |γ(an b¯ n ¯ = μ(a∗ b) Hence, μ(a b) ¯ −1 (μ(a∗ b) ¯ − μ(a b))| ¯ > 0, which is a contradicφn )|/(φn (1 − φn )) ≥ |σ(b) tion. Q.E.D. The following corollary shows that the right-hand side of the optimality equation satisfies a quadratic growth condition whenever the beliefs are bounded away from 0 and 1. This technical result is used in the existence proofs of Appendices C and D.
REPUTATION IN CONTINUOUS-TIME GAMES
829
COROLLARY B.1—Quadratic Growth: Assume Condition 1. For all M > 0 ¯ ∈ A × B, and ε > 0 there exists K > 0 such that for all φ ∈ [ε 1 − ε], (a b) u ∈ [−M M], and u ∈ R, ¯ ∈ N (φ φ(1 − φ)u /r) (a b) ¯ 2u 2r(u − g(a b)) ≤ K(1 + |u |2 ) + ⇒ ¯ φ)|2 1−φ |γ(a b The proof follows directly from Lemma B.1 and the bounds u ∈ [−M M] and φ ∈ [ε 1 − ε]. APPENDIX C: APPENDIX FOR SECTION 6 Throughout this section, we maintain Conditions 1 and 2(a). For the case when Condition 2(b) holds, all the arguments in this section remain valid provided we change the definition of N and set N (φ z) equal to N (φ 0) for z < 0. Proposition C.4 shows that under Conditions 1 and 2(b), the resulting solution U must be increasing, so the values of N (φ z) for z < 0 are irrelevant. C.1. Existence of a Bounded Solution of the Optimality Equation In this subsection we prove the following proposition. PROPOSITION C.1: The optimality equation (19) has at least one C 2 solution ¯ of feasible payoffs of the large player. that takes values in the interval [g g] ¯ The proof relies on standard results from the theory of boundary-value problems for second-order equations. We now review the part of that theory that is relevant for our existence result. Given a continuous function H : [a b] × R2 → R and real numbers c and d, consider the boundary-value problem (38)
U (x) = H(x U(x) U (x)) U(a) = c
x ∈ [a b]
U(b) = d
Given real numbers α and β, we are interested in sufficient conditions for (38) to admit a C 2 solution U : [a b] → R with α ≤ U(x) ≤ β for all x ∈ [a b]. One such sufficient condition is called the Nagumo condition, which posits the existence of a positive continuous function ψ : [0 ∞) → R that satisfies ∞ v dv =∞ ψ(v) 0
830
E. FAINGOLD AND Y. SANNIKOV
and |H(x u u )| ≤ ψ(|u |)
∀(x u u ) ∈ [a b] × [α β] × R
In the proof of Proposition C.1 below, we use the following standard result, which follows from Theorems II.3.1 and I.4.4 in de Coster and Habets (2006): LEMMA C.1: Suppose that α ≤ c ≤ β, that α ≤ d ≤ β, and that H : [a b] × R2 → R satisfies the Nagumo condition relative to α and β. Then the following statements hold: (a) The boundary-value problem (38) admits a solution that satisfies α ≤ U(x) ≤ β for all x ∈ [a b]. (b) There is a constant R > 0 such that every C 2 function U : [a b] → R that satisfies α ≤ U(x) ≤ β for all x ∈ [a b] and solves U (x) = H(x U(x) U (x))
x ∈ [a b]
satisfies |U (x)| ≤ R for all x ∈ [a b]. We are now ready to prove Proposition C.1. PROOF OF PROPOSITION C.1: Since the right-hand side of the optimality equation blows up at φ = 0 and φ = 1, our strategy of proof is to construct the solution as the limit of a sequence of solutions on expanding closed subintervals of (0 1). Indeed, let H : (0 1) × R2 → R denote the right-hand side of the optimality equation, that is, H(φ u u ) =
2r(u − g(N (φ φ(1 − φ)u /r))) 2u + 1−φ |γ(N (φ φ(1 − φ)u /r) φ)|2
and for each n ∈ N, consider the boundary-value problem U (φ) = H(φ U(φ) U (φ))
φ ∈ [1/n 1 − 1/n]
¯ U(1/n) = g U(1 − 1/n) = g ¯ By Corollary B.1, there exists a constant Kn > 0 such that |H(φ u u )| ≤ Kn (1 + |u |2 ) ¯ × R ∀(φ u u ) ∈ [1/n 1 − 1/n] × [g g] ¯ ∞ Since 0 Kn−1 (1 + v2 )−1 v dv = ∞, for each n ∈ N, the boundary-value prob¯ lem above satisfies the hypothesis of Lemma C.1 relative to α = g and β = g. Therefore, for each n ∈ N, there exists a C 2 function Un : [1/n 1 ¯− 1/n] → R ¯ which solves the optimality equation on [1/n 1 − 1/n] and satisfies g ≤ Un ≤ g. ¯
REPUTATION IN CONTINUOUS-TIME GAMES
831
Since for m ≥ n, the restriction of Um to [1/n 1 − 1/n] also solves the optimality equation on [1/n 1 − 1/n], by Lemma C.1 and the quadratic growth condition above, the first and second derivatives of Um are uniformly bounded for m ≥ n, and hence the sequence (Um Um )m≥n is bounded and equicontinuous over the domain [1/n 1 − 1/n]. By the Arzelà–Ascoli theorem, for every n ∈ N, there exists a subsequence of (Um Um )m≥n which converges uniformly on [1/n 1 − 1/n]. Then using a diagonalization argument, we can find a subsequence of (Un )n∈N , denoted (Unk )k∈N , which converges pointwise to a con¯ such that on every closed tinuously differentiable function U : (0 1) → [g g] ¯ subinterval of (0 1), the convergence takes place in C 1 . Finally, U must solve the optimality equation on (0 1), since Unk (φ) = H(φ Unk (φ) Un k (φ)) converges to H(φ U(φ) U (φ)) uniformly on every closed subinterval of (0 1), by the continuity of H and the uniform convergence (Unk Un k ) → (U U ) on closed subintervals of (0 1). Q.E.D. C.2. Boundary Conditions PROPOSITION C.2: If U is a bounded solution of the optimality equation (19) on (0 1), then it satisfies the following boundary conditions at p = 0 and 1: (39)
lim U(φ) = g(N (p 0))
φ→p
lim φ(1 − φ)U (φ) = 0
φ→p
lim φ2 (1 − φ)2 U (φ) = 0
φ→p
The proof follows directly from Lemmas C.4, C.5, and C.6 below. Lemmas C.2 and C.3 are intermediate steps. LEMMA C.2: If U : (0 1) → R is a bounded solution of the optimality equation, then U has bounded variation. PROOF: Suppose there exists a bounded solution U of the optimality equation with unbounded variation near p = 0 (the case p = 1 is similar). Then let (φn )n∈N be a decreasing sequence of consecutive local maxima and minima of U such that φn is a local maximum for n odd and a local minimum for n even. Thus for n odd, we have U (φn ) = 0 and U (φn ) ≤ 0. From the optimality equation, it follows that g(N (φn 0)) ≥ U(φn ). Likewise, for n even, we have g(N (φn 0)) ≤ U(φn ). Thus, the total variation of g(N (φ 0)) on (0 φ1 ] is no smaller than the total variation of U and, therefore, g(N (φ 0)) has unbounded variation near zero. However, this is a contradiction, since Q.E.D. g(N (φ 0)) is Lipschitz continuous under Condition 2. LEMMA C.3: Let U : (0 1) → R be any bounded continuously differentiable function. Then lim inf φU (φ) ≤ 0 ≤ lim sup φU (φ) φ→0
φ→0
832
E. FAINGOLD AND Y. SANNIKOV
and lim inf(1 − φ)U (φ) ≤ 0 ≤ lim sup(1 − φ)U (φ) φ→1
φ→1
PROOF: Suppose, toward a contradiction, that lim infφ→0 φU (φ) > 0 (the case lim supφ→0 φU (φ) < 0 is analogous). Then, for some c > 0 and φ¯ > 0, ¯ which implies U (φ) ≥ c/φ for we must have φU (φ) ≥ c for all φ ∈ (0 φ], ¯ all φ ∈ (0 φ]. But then U cannot be bounded, since the antiderivative of 1/φ, which is log φ, tends to ∞ as φ → 0, a contradiction. The proof for the case φ → 1 is analogous. Q.E.D. LEMMA C.4: If U is a bounded solution of the optimality equation, then limφ→p φ(1 − φ)U (φ) = 0 for p ∈ {0 1}. PROOF: Suppose, toward a contradiction, that φU (φ) 0 as φ → 0. Then, by Lemma C.3, lim inf φU (φ) ≤ 0 ≤ lim sup φU (φ) φ→0
φ→0
with at least one strict inequality. Without loss of generality, assume lim supφ→0 φU (φ) > 0. Hence there exist constants 0 < k < K such that φU (φ) crosses levels k and K infinitely many times in a neighborhood of 0. ¯ φ)| ≥ Cφ whenever Thus, by Lemma B.1, there exists C > 0 such that |γ(a b 1 φU (φ) ∈ (k K) and φ ∈ (0 2 ). On the other hand, by the optimality equation, for some constant L > 0, we have |U (φ)| ≤ φL2 . This bound implies that for all φ ∈ (0 12 ) such that φU (φ) ∈ (k K), (φU (φ)) ≤ |φU (φ)| + |U (φ)| = 1 + |φU (φ)| |U (φ)| |U (φ)| L |U (φ)| ≤ 1+ k which implies |U (φ)| ≥
|(φU (φ)) | 1 + L/k
It follows that on every interval where φU (φ) crosses k and stays in (k K) until crossing K, the total variation of U is at least (K − k)/(1 + L/k). Since this happens infinitely many times in a neighborhood of φ = 0 function U must have unbounded variation in that neighborhood, and this is a contradiction (by virtue of Lemma C.2). The proof that limφ→1 (1 − φ)U (φ) = 0 is analogous. Q.E.D.
REPUTATION IN CONTINUOUS-TIME GAMES
833
LEMMA C.5: If U : (0 1) → R is a bounded solution of the optimality equation, then for p ∈ {0 1}, lim U(φ) = g(N (p 0))
φ→p
PROOF: First, by Lemma C.2, U must have bounded variation and so the limφ→p U(φ) exists. Consider p = 0 and assume, toward a contradiction, that limφ→0 U(φ) = U0 < g(aN bN ) where (aN bN ) = N (0 0) is the Nash equilibrium of the stage game (the proof for the reciprocal case is similar). By Lemma C.4, limφ→0 φU (φ) = 0, which implies that N (φ φ(1 − φ)U (φ)/r) converges to (aN bN ) as φ → 0. Recall the optimality equation U (φ) = =
2U (φ) 2r(U(φ) − g(N (φ φ(1 − φ)U (φ)/r))) + 1−φ |γ(N (φ φ(1 − φ)U (φ)/r) φ)|2 2U (φ) h(φ) + 1−φ φ2
where h(φ) is a continuous function that converges to 2r(U0 − g(aN bN )) <0 |σ(bN )−1 (μ(a∗ bN ) − μ(aN bN ))|2 as φ → 0. Since U (φ) = o(1/φ) by Lemma C.3, it follows that for some φ¯ > 0, ¯ But there exists a constant K > 0 such that U (φ) < −K/φ2 for all φ ∈ (0 φ). then U cannot be bounded, since the second-order antiderivative of −1/φ2 , which is log φ, tends to −∞ as φ → 0. The proof for the case p = 1 is similar. Q.E.D. LEMMA C.6: If U : (0 1) → R is a bounded solution of the optimality equation, then lim φ2 (1 − φ)2 U (φ) = 0 for
φ→p
p ∈ {0 1}
PROOF: Consider p = 1. Fix an arbitrary M > 0 and choose φ ∈ (0 1) so ¯ exists C > that (1 − φ)|U (φ)| < M for all φ ∈ (φ 1). By Lemma B.1, there ¯ φ)| ≥ C(1 − φ) for all φ ∈ (φ 1). 0 such that |γ(N (φ φ(1 − φ)U (φ)/r) ¯ Hence, by the optimality equation, for all φ ∈ (φ 1), we have ¯ (1 − φ)2 |U (φ)| ≤ 2(1 − φ)|U (φ)| + (1 − φ)2
2r|U(φ) − g(N (φ φ(1 − φ)U (φ)/r))| |γ(N (φ φ(1 − φ)U (φ)/r) φ)|2
834
E. FAINGOLD AND Y. SANNIKOV
≤ 2(1 − φ)|U (φ)| + 2rC −2 U(φ) − g N (φ φ(1 − φ)U (φ)/r) →0
as
φ → 1
by Lemmas C.4 and C.5. The case p = 0 is analogous.
Q.E.D.
C.3. Uniqueness LEMMA C.7: If two bounded solutions of the optimality equation, U and V , satisfy U(φ0 ) ≤ V (φ0 ) and U (φ0 ) ≤ V (φ0 ) with at least one strict inequality, then U(φ) < V (φ) and U (φ) < V (φ) for all φ > φ0 . Similarly, if U(φ0 ) ≤ V (φ0 ) and U (φ0 ) ≥ V (φ0 ) with at least one strict inequality, then U(φ) < V (φ) and U (φ) > V (φ) for all φ < φ0 . PROOF: Suppose that U(φ0 ) ≤ V (φ0 ) and U (φ0 ) < V (φ0 ). If U (φ) < V (φ) for all φ > φ0 , then we must also have U(φ) < V (φ) on that range. Otherwise, let
def
φ1 = inf{φ ∈ [φ0 1) : U (φ) ≥ V (φ)} Then U (φ1 ) = V (φ1 ) by continuity, and U(φ1 ) < V (φ1 ) since U(φ0 ) ≤ V (φ0 ) and U (φ) < V (φ) on [φ0 φ1 ). By the optimality equation, it follows that U (φ1 ) < V (φ1 ), and hence U (φ1 −ε) > V (φ1 −ε) for sufficiently small ε > 0, and this contradicts the definition of φ1 . For the case when U(φ0 ) < V (φ0 ) and U (φ0 ) = V (φ0 ), the optimality equation implies that U (φ0 ) < V (φ0 ). Therefore, U (φ) < V (φ) on (φ0 φ0 + ε) and the argument proceeds as above. Finally, the argument for φ < φ0 when U(φ0 ) ≤ V (φ0 ) and U (φ0 ) ≥ V (φ0 ) with at least one strict inequality is similar. Q.E.D. PROPOSITION C.3: The optimality equation has a unique bounded solution. PROOF: By Proposition C.1, a bounded solution of the optimality equation exists. Suppose U and V are two such bounded solutions. Assuming that V (φ) > U(φ) for some φ ∈ (0 1), let φ0 ∈ (0 1) be the point where the difference V − U is maximized, which is well defined because limφ→p U(φ) = limφ→p V (φ) for p ∈ {0 1} by Proposition C.2. Thus we have V (φ0 ) − U(φ0 ) > 0 and V (φ0 ) − U (φ0 ) = 0. But then, by Lemma C.7, the difference V (φ) − Q.E.D. U(φ) must be strictly increasing for φ > φ0 , a contradiction. Finally, the following proposition shows that under Conditions 1 and 2(b), the unique solution U : (0 1) → R of the modified optimality equation in which N (φ z) is set equal to N (φ 0) when z < 0 must be an increasing function. In particular, U must be the unique bounded increasing solution of the optimality equation.
835
REPUTATION IN CONTINUOUS-TIME GAMES
PROPOSITION C.4: Under Conditions 1 and 2(b), U : (0 1) → R is an increasing function and is, therefore, the unique bounded increasing solution of the optimality equation. PROOF: By Proposition C.3, the modified optimality equation—with N (φ z) set equal to N (φ 0) when z < 0—has a unique bounded solution U : (0 1) → R, which satisfies the boundary conditions limφ→0 U(φ) = g(N (0 0)) and limφ→1 U(φ) = g(N (1 0)) by Lemma C.5. Toward a contradiction, suppose that U is not increasing, so that U (φ) < 0 for some φ ∈ (0 1). Take a maximal subinterval (φ0 φ1 ) ⊆ (0 1) on which U is strictly decreasing. Since g(N (φ 0)) is increasing in φ, we have limφ→0 U(φ) = g(N (0 0)) ≤ g(N (1 0)) = limφ→1 U(φ), hence (φ0 φ1 ) = (0 1). Without loss of generality, assume φ1 < 1. Then φ1 must be an interior local minimum, so U (φ1 ) = 0 and U (φ1 ) ≥ 0. Also, we must have U(φ1 ) ≥ g(N (φ1 0)), for otherwise U (φ1 ) =
2r(U(φ1 ) − g(N (φ1 0))) < 0 |γ(N (φ1 0) φ1 )|2
But then, since lim U(φ) > U(φ1 ) ≥ g(N (φ1 0)) ≥ g(N (0 0)) = lim U(φ)
φ→φ0
φ→0
it follows that φ0 > 0. Therefore, U (φ0 ) = 0 and U (φ0 ) = ≥
2r(U(φ0 ) − g(N (φ0 0))) |γ(N (φ0 0) φ0 )|2 2r(U(φ0 ) − g(N (φ1 0))) > 0 |γ(N (φ0 0) φ0 )|2
so φ0 is a strict local minimum, a contradiction.
Q.E.D.
C.4. The Continuity Lemma Used in the Proof of Theorem 4 LEMMA C.8: Let U : (0 1) → R be the unique bounded solution of the optimality equation and let d : A × Δ(B) × [0 1] → R and f : A × Δ(B) × [0 1] × R → R be the continuous functions defined by ⎧ 2 ⎪ ¯ − |γ(a b φ)| U (φ) ⎪ r(U(φ) − g(a b)) ⎪ ⎪ ⎨ 1−φ def 1 ¯ d(a b φ) = (40) 2 − |γ(a b φ)| U (φ) φ ∈ (0 1) ⎪ ⎪ 2 ⎪ ⎪
⎩ ¯ φ = 0 or 1 r g(N (φ 0)) − g(a b)
836
E. FAINGOLD AND Y. SANNIKOV
and (41)
¯ φ β) f (a b ¯ − φ(1 − φ)(μ(a∗ b) ¯ − μ(a b)) ¯ (σ(b) ¯ )−1 U (φ) = rβ σ(b)
def
γ(abφ)
¯ φ β) that satisfy For every ε > 0, there exists δ > 0 such that for all (a b (42)
¯ + β · μ(a b) ¯ a ∈ arg max g(a b) a ∈A
¯ + (1 − φ)h(a b b) ¯ b¯ ∈ arg max φh(a∗ b b) b ∈B
¯ ∀b ∈ supp b
¯ φ) > −ε or f (a b ¯ φ β) ≥ δ. either d(a b PROOF: Since φ(1 − φ)U (φ) is bounded (by Proposition C.2) and there ¯ · y| ≥ c|y| for all y ∈ Rd and b¯ ∈ Δ(B), there exist exists c > 0 such that |σ(b) ¯ φ β)| > m for all β ∈ Rd with constants M > 0 and m > 0 such that |f (a b |β| > M. Consider the set Φ of all tuples (a b φ β) ∈ A × Δ(B) × [0 1] × Rd with ¯ φ) ≤ −ε. Since U satisfies the boundary |β| ≤ M that satisfy (42) and d(a b conditions (39) by Proposition C.2, d is a continuous function and hence Φ is a closed subset of the compact set {(a b φ β) ∈ A × Δ(B) × [0 1] × Rd : |β| ≤ M}, and therefore Φ is a compact set.37 The boundary conditions (39) also imply that the function |f | is continuous, so it achieves its minimum, denoted η, ¯ φ β) ∈ Φ, we have d(a b ¯ φ) = 0 on Φ. We must have η > 0, since for all (a b ¯ φ β) = 0 by the optimality equation, as we argued in the whenever f (a b ¯ φ β) satisfying (42), either proof of Theorem 4. It follows that for all (a b def ¯ φ) > −ε or |f (a b ¯ φ β)| ≥ δ = min{m η}, as required. d(a b Q.E.D. C.5. Proof of Theorem 5 First, we need the following lemma: LEMMA C.9: Under Conditions 1, 2, and 3, for any φ0 ∈ (0 1) and k > 0, the initial value problem (43)
v (φ) =
! (φ φ(1 − φ)v(φ)))) 2v(φ) 2(g(a∗ b∗ ) − g(N + ! (φ φ(1 − φ)v(φ)) φ)|2 1−φ |γ(N
v(φ0 ) = k 37
Recall that Δ(B) is compact in the topology of weak convergence of probability measures.
REPUTATION IN CONTINUOUS-TIME GAMES
837
where N (φ z) if Condition 2(a) holds, def ! N (φ z) = N (φ max{z 0}) if Condition 2(b) holds, has a unique solution on the interval (0 1). Moreover, the solution satisfies lim infφ→1 v(φ) < 0. PROOF: Assume Conditions 1, 2(a), and 3. (The proof for the case when Condition 2(b) holds is very similar and thus is omitted.) Fix φ0 ∈ (0 1) and k > 0. First, note that on every compact subset of (0 1) × R, the right-hand side of (43) is Lipschitz continuous in (φ v) by Conditions 1 and 2(a) and Lemma B.1. This implies that a unique solution exists on a neighborhood of φ0 . Let Jφ0 denote the maximal interval of existence/uniqueness of the solution of (43), and let us show that Jφ0 = (0 1). Indeed, the Lipschitz continuity of g, Condition 3, and the bound from Lemma B.1 imply that the solution of (43) satisfies ∀ε > 0 ∃Kε > 0 such that
|v (φ)| ≤ Kε 1 + |v(φ)| ∀φ ∈ Jφ0 ∩ [ε 1 − ε] Given this linear growth condition, a standard argument shows that |v| and |v | cannot blow up in any closed sub-interval of (0 1). Thus, we must have Jφ0 = (0 1), that is, the solution v is well defined on the whole interval (0 1). It remains to show that lim infφ→1 v(φ) < 0, but first we will prove the intermediate result that lim supφ→1 (1 − φ)v(φ) ≤ 0. By the large player’s incentive constraint in the definition of N , for each (a b φ z) ∈ A × B × (0 1) × R with (a b) ∈ N (φ z), (44)
g(a b) − g(a∗ b) ≥ z(μ(a∗ b) − μ(a b)) (σ(b)σ(b) )−1 (μ(a∗ b) − μ(a b)) = z|σ(b)−1 (μ(a∗ b) − μ(a b))|2 = z|γ(a b φ)|2 /(φ(1 − φ))2
This implies that, for each φ ∈ (0 1) and (a b) ∈ N (φ φ(1 − φ)v(φ)), (45)
2(g(a∗ b∗ ) − g(a b)) |γ(a b φ)|2 =
2(g(a∗ b∗ ) − g(a∗ b)) 2(g(a∗ b) − g(a b)) + |γ(a b φ)|2 |γ(a b φ)|2
≤
2|g(a∗ b∗ ) − g(a∗ b)| 2v(φ) − 2 |γ(a b φ)| φ(1 − φ)
838
E. FAINGOLD AND Y. SANNIKOV
Moreover, by the Lipschitz continuity of g and Condition 3, there is a constant K1 > 0 such that for each φ ∈ (0 1) and (a b) ∈ N (φ φ(1 − φ)v(φ)), |g(a∗ b∗ ) − g(a∗ b)| ≤ K1 φ−1 |γ(a b φ)| Plugging this inequality into (45) yields, for each φ ∈ (0 1) and (a b) ∈ N (φ φ(1 − φ)v(φ)), 2K1 2(g(a∗ b∗ ) − g(a b)) 2v(φ) + ≤− |γ(a b φ)|2 φ(1 − φ) φ|γ(a b φ)| But since, by Lemma B.1, there is a constant C > 0 such that for each φ ∈ (0 1) and (a b) ∈ N (φ φ(1 − φ)v(φ)), |γ(a b φ)| ≥
Cφ(1 − φ) 1 + φ(1 − φ)|v(φ)|
it follows that for each φ ∈ (0 1) and (a b) ∈ N (φ φ(1 − φ)v(φ)), 2(g(a∗ b∗ ) − g(a b)) 2K1 (1 + φ(1 − φ)|v(φ)|) 2v(φ) + ≤− 2 |γ(a b φ)| φ(1 − φ) Cφ2 (1 − φ) Plugging this inequality into (43) and simplifying yields v (φ) ≤ C1 |v(φ)| +
C2 1−φ
∀φ ∈ [φ0 1)
where C1 = 2(K1 /C − 1)/φ0 and C2 = 2K1 /(Cφ20 ). This differential inequality implies φ −C1 (x−φ0 ) e C1 (φ−φ0 ) C1 (φ−φ0 ) dx ∀φ ∈ [φ0 1) v(φ) ≤ ke + C2 e 1−x φ0 and, hence, v(φ) ≤ C3 − C4 log(1 − φ) ∀φ ∈ [φ0 1) where C3 and C4 are positive constants. But since limx→0 x log x = 0, it follows that lim supφ→1 (1 − φ)v(φ) ≤ 0, as was to be proved. Finally, let us show that lim infφ→1 v(φ) < 0. Suppose not, that is, suppose lim infφ→1 v(φ) ≥ 0. Then we must have limφ→1 (1 − φ)v(φ) = 0, since we have already shown that lim supφ→1 (1 − φ)v(φ) ≤ 0 above. Hence, limφ→1 N (φ ˜ b∗ ), where a˜ ∈ arg maxa∈A g(a b∗ ). But since φ(1 − φ)v(φ)) = N (1 0) = (a ∗ ∗ ∗ ˜ b ) < 0 by Condition 1 and μ(a∗ b∗ ) = μ(a ˜ b∗ ) by Condig(a b ) − g(a tion 3(a), there is a constant K2 > 0 such that 2(g(a∗ b∗ ) − g(N (φ φ(1 − φ)v(φ))) 2K2 ≤− 2 |γ(N (φ φ(1 − φ)v(φ)) φ)| (1 − φ)2
∀φ ≈ 1
REPUTATION IN CONTINUOUS-TIME GAMES
839
Also, since limφ→1 (1 − φ)v(φ) = 0, we have K2 2v(φ) ≤ 1 − φ (1 − φ)2
∀φ ≈ 1
Plugging these two inequalities into (43) and simplifying yields v (φ) ≤ −
K2 (1 − φ)2
∀φ ≈ 1
which implies limφ→1 v(φ) = −∞. But this is a contradiction, since we have Q.E.D. assumed lim infφ→1 v(φ) ≥ 0. We are now ready to prove Theorem 5. PROOF OF THEOREM 5: Fix an arbitrary φ0 ∈ (0 1). If we show that limr→0 Ur (φ0 )/r = ∞, it will follow that limr→0 ar (φ0 ) = a∗ , since there is a constant K0 > 0 such that for each r > 0, g¯ − g ≥ g(ar (φ0 ) br (φ0 )) − g(a∗ br (φ0 )) ¯ ≥ K0 |a∗ − ar (φ0 )|2 Ur (φ0 )/r by the large player’s incentive constraint (44) and Condition 3(a). Since limr→0 ar (φ0 ) = a∗ implies limr→0 br (φ0 ) = b∗ , to conclude the proof, we need only show that limr→0 Ur (φ0 )/r = ∞. En route to a contradiction, suppose there is some k > 0 such that lim infr→0 Ur (φ0 )/r ≤ k. CLAIM 4: ∀ε > 0, ∀φ1 ∈ (φ0 1) ∃¯r > 0 such that ∀r ∈ (0 r¯], Ur (φ0 ) ≤ kr
⇒
Ur (φ) ≤ r(v(φ) + ε)
∀φ ∈ [φ0 φ1 ]
where v is the unique solution of the initial value problem (43). To prove this claim, fix ε > 0, and φ1 ∈ (φ0 1) and recall that, by Theorem 1, for each δ > 0 there exists rδ > 0 such that for each 0 < r < rδ and φ ∈ [φ0 φ1 ], we have Ur (φ) < g(a∗ b∗ ) + δ and, hence, Ur (φ) <
2Ur (φ) 1−φ +
2r(δ + g(a∗ b∗ ) − g(N (φ φ(1 − φ)Ur (φ)/r))) |γ(N (φ φ(1 − φ)Ur (φ)/r) φ)|2
Thus, for each δ > 0 and 0 < r < rδ , (46)
Ur (φ0 ) ≤ kr
⇒
Ur (φ) ≤ Vrδ (φ) ∀φ ∈ [φ0 φ1 ]
840
E. FAINGOLD AND Y. SANNIKOV
where Vrδ solves the initial value problem38 Vrδ (φ) =
2Vrδ (φ) 1−φ +
! (φ φ(1 − φ)Vrδ (φ)/r))) 2r(δ + g(a∗ b∗ ) − g(N ! (φ φ(1 − φ)Vrδ (φ)/r) φ)|2 |γ(N
Vrδ (φ0 ) = kr where
N (φ z) ! N (φ z) = N (φ max{z 0})
if Condition 2(a) holds, if Condition 2(b) holds.
Clearly, Vrδ must be homogeneous of degree 1 in r, so it must be of the form Vrδ (φ) = rvδ (φ), where vδ is independent of r and solves the initial value problem (47)
vδ (φ) =
2vδ (φ) 1−φ +
! (φ φ(1 − φ)vδ (φ))) 2(δ + g(a∗ b∗ ) − g(N ! (φ φ(1 − φ)vδ (φ)) φ)|2 |γ(N
vδ (φ0 ) = k which coincides with (43) when δ = 0. Thus, by (46), it suffices to show that for some δ > 0, we have vδ (φ) ≤ v(φ) + ε for all φ ∈ [φ0 φ1 ], where v is the unique solution of (43). In effect, over the domain [φ0 φ1 ], the right-hand side of (47) is jointly continuous in (δ φ vδ ) and Lipschitz continuous in vδ uniformly in (φ δ). Therefore, by standard results on existence, uniqueness, and continuity of solutions to ordinary differential equations, for every δ > 0 small enough, a unique solution, vδ , exists on the interval [φ0 φ1 ] and the mapping δ → vδ is continuous in the sup-norm. Hence, for some δ0 > 0 small enough, vδ0 (φ) ≤ v(φ) + ε for all φ ∈ [φ0 φ1 ]. Letting r¯ = rδ0 thus concludes the proof of the claim. The claim above implies (48)
lim inf lim inf Ur (φ)/r < 0 φ→1
r→0
since lim infφ→1 v(φ) < 0 by Lemma C.9 and lim infr→0 Ur (φ0 )/r ≤ k by assumption. Thus, under Condition 2(b), we readily get a contradiction, since in this case Ur (φ) must be increasing in φ for each r > 0, by Theorem 4. 38
Implication (46) follows from the fact that Ur (φ) < Vrδ (φ) whenever Ur (φ) = Vrδ (φ).
REPUTATION IN CONTINUOUS-TIME GAMES
841
Now suppose Condition 2(a) holds. Since g(N (1 0)) > g(a∗ b∗ ) by Condition 1, there is some η > 0 such that g(N (φ 0)) > g(a∗ b∗ ) + η ∀φ ≈ 1 This fact, combined with (48) and the upper bound from Theorem 1, implies that there is some φ1 ∈ (φ0 1) and r > 0 such that (49)
Ur (φ1 ) < 0
and
Ur (φ1 ) < g(a∗ b∗ ) + η < g(N (φ 0))
∀φ ∈ [φ1 1)
We claim that Ur (φ) < g(a∗ b∗ ) + η for all φ ∈ [φ1 1). Otherwise, Ur must have a local minimum at some point φ2 ∈ (φ1 1) where (50)
Ur (φ2 ) < g(a∗ b∗ ) + η
since Ur (φ1 ) < g(a∗ b∗ ) + η and Ur (φ1 ) < 0. Since at the local minimum φ2 , we must have Ur (φ2 ) = 0 and Ur (φ2 ) ≥ 0, the optimality equation implies 0 ≤ U (φ2 ) =
2r(Ur (φ2 ) − g(N (φ2 0))) |γ(N (φ2 0) φ)|2
and, hence, Ur (φ2 ) − g(N (φ2 0)) ≥ 0, which is impossible by (49) and (50). We have thus proved that Ur (φ) < g(a∗ b∗ ) + η for all φ ∈ [φ1 1). But this is a contradiction, since Ur must satisfy the boundary condition limφ→1 Ur (φ) = g(N (1 0)) by Theorem 4. Q.E.D. C.6. Proof of Proposition 4 The proof relies on two lemmas, presented below. Throughout this section we maintain all the assumptions of Proposition 4. LEMMA C.10: N (φ z) = ∅ ∀(φ z) ∈ [0 1] × [0 ∞). PROOF: Fix (φ z) ∈ [0 1] × [0 ∞) and consider the correspondence Γ : A × B ⇒ A × B, ⎧ ⎫ aˆ ∈ arg max g(a b) ⎪ ⎪ ⎪ ⎪ ⎨ ⎬ a −2 ∗ ˆ (μ(a b) − μ(a b))μ(a b) + zσ(b) ˆ b) : Γ (a b) = (a ⎪ ⎪ ⎪ ⎩ ⎭ bˆ ∈ arg max φh(a∗ b b) + (1 − φ)h(a b b) ⎪ b
Thus, an action profile (a b) ∈ A × B belongs to N (φ z) if and only if it is a fixed point of Γ . By Brouwer’s fixed-point theorem, it is enough to show that Γ is single-valued and continuous. Indeed, since g, h, μ, and σ are
842
E. FAINGOLD AND Y. SANNIKOV
continuous, Γ is nonempty-valued and upper hemicontinuous. To see that Γ is actually single-valued (and hence continuous), fix (a b) ∈ A × B and note that the assumptions g11 < 0, h22 < 0, μ1 > 0, and μ11 ≤ 0 imply that g(· b)+zσ(b)−2 (μ(a∗ b)−μ(a b))μ(· b) and φh(a∗ · b)+(1−φ)h(a · b) are strictly concave and hence that Γ (a b) is a singleton. Q.E.D. The proof of Proposition 4 below uses a first-order condition to characterize the action profile (a b) ∈ N (φ z). To express this condition, define a function F : A × B × [0 1] × [0 ∞) → R2 as μ(a∗ b) − μ(a b) def F(a b φ z) = g1 (a b) + z μ1 (a b) (51) σ(b)2 ∗ φh2 (a b b) + (1 − φ)h2 (a b b) Thus, the first-order condition for (a b) ∈ N (φ z) can be written as (52)
F(a b φ z) · (aˆ − a bˆ − b) ≤ 0
ˆ ∈ A × B ˆ b) ∀(a
where the multi dot (·) designates inner product. LEMMA C.11: The function F : A × B × [0 1] × [0 ∞) → R2 , defined by (51), satisfies the following conditions: ˆ zˆ ) ∈ [0 1] × [0 ∞), (A) ∃L > 0 such that ∀(a b) ∈ A × B and ∀(φ z), (φ ˆ zˆ ) − F(a b φ z)| ≤ L|(φ ˆ z) ˆ − (φ z)| |F(a b φ ˆ ∈ ˆ b) (B) ∀K > 0 ∃M > 0 such that ∀(φ z) ∈ [0 1] × [0 K] and ∀(a b), (a A × B, ˆ φ z) − F(a b φ z)) · (aˆ − a bˆ − b) ˆ b (F(a ≤ −M(|aˆ − a|2 + |bˆ − b|2 ) ˆ z) ˆ ∈ [0 1] × [0 ∞), PROOF: For each (a b) ∈ A × B and (φ z), (φ ˆ zˆ ) − F(a b φ z) F(a b φ σ(b)−2 (μ(a∗ b) − μ(a b))μ1 (a b)(zˆ − z) = (h2 (a∗ b b) − h2 (a b b))(φˆ − φ) which yields condition (A) with L = 2 max(ab) σ(b)−2 |μ(a b)||μ1 (a b)| + |h2 (a b b)|.
843
REPUTATION IN CONTINUOUS-TIME GAMES
Turning to condition (B), fix K > 0. By the mean value theorem, for each ˆ ∈ A × B and (φ z) ∈ [0 1] × [0 K], there exists some (a ¯ such ˆ b) ¯ b) (a b), (a that ˆ φ z) − F(a b φ z)) · (aˆ − a bˆ − b) ˆ b (F(a ¯ φ z) aˆ − a ¯ b = [ aˆ − a bˆ − b ] D(ab) F(a bˆ − b ¯ φ z) is the derivative of F(· · φ z) at (a ¯ Thus, ¯ b ¯ b). where D(ab) F(a ˆ φ) − F(a b φ z)) · (aˆ − a bˆ − b) ˆ b (F(a ≤ −M(|aˆ − a|2 + |bˆ − b|2 ) where
" def M = − sup x D(ab) F(a b φ z)x :
# (a b φ z x) ∈ A × B × [0 1] × [0 K] × R2 with |x| = 1
It remains to show that M > 0. Since g, h, and μ are twice continuously differentiable and σ is continuous, the supremum above is attained at some point in A × B × [0 1] × [0 K] × (R2 \ {0}). Thus, it suffices to prove that D(ab) F(a b φ z) is negative definite for each (a b φ z) ∈ A × B × [0 1] × [0 K]. Fix an arbitrary (a b φ z) ∈ A × B × [0 1] × [0 K]. We have g11 + zσ −2 ((μ∗ − μ)μ11 − μ21 ) D(ab) F(a b φ z) = (1 − φ)h12 g12 + zσ −2 ((μ∗ − μ)μ12 + (μ∗2 − μ2 )μ1 ) φ(h∗22 + h∗23 ) + (1 − φ)(h22 + h23 ) where, to save on notation, we omit the arguments of the functions on the right-hand side and use an asterisk (∗ ) to indicate when a function is evaluated at (a∗ b b) or (a∗ b) rather than at (a b b) or (a b). Thus, for each ε ≥ 0, D(ab) F(a b φ z) = Ψ (ε) + Λ(ε) where
def
Ψ (ε) = and
def
Λ(ε) =
g11 (1 − φ)h12
g12 ∗ ∗ φ(h22 + h23 ) + (1 − φ)(h22 + h23 ) + ε
zσ −2 ((μ∗ − μ)μ11 − μ21 ) 0
zσ −2 ((μ∗ − μ) · μ12 + (μ∗2 − μ2 )μ1 ) −ε
844
E. FAINGOLD AND Y. SANNIKOV
The matrix Ψ (0) is negative definite by condition (a), hence there is ε0 > 0 small enough such that Ψ (ε0 ) remains negative definite. Moreover, Λ(ε0 ) is negative semidefinite by condition (c), since ε0 > 0 and z ≥ 0. It follows that D(ab) F(a b φ z) = Ψ (ε0 ) + Λ(ε0 ) is negative definite and, hence, M > 0. Q.E.D. We are now ready to prove Proposition 4. PROOF OF PROPOSITION 4: (a), (b), and (c) ⇒ Condition 2(b). First, N (φ z) is nonempty for each (φ z) ∈ [0 1] × [0 ∞) by Lemma C.10. Moreover, (a b) ∈ N (φ z) if and only if (a b) satisfies the first-order condition (52), because the functions g(· b) + z(μ(a∗ b) − μ(a b))μ(· b) and φh(a∗ · b) + (1 − φ)h(a · b) are differentiable and concave for each fixed (a b). We will now show that N (φ z) is a singleton for each (φ z) ∈ ˆ ∈ N (φ z). ˆ b) [0 1] × [0 ∞). Let (φ z) ∈ [0 1] × [0 ∞) and pick (a b), (a Thus, F(a b φ z) · (aˆ − a bˆ − b) ≤ 0
and
ˆ φ z) · (a − a ˆ ≤0 ˆ b ˆ b − b) F(a by the first-order conditions. Subtracting the former inequality from the latter yields ˆ φ z) − F(a b φ z)) · (aˆ − a bˆ − b) ≥ 0 ˆ b (F(a ˆ by part (B) of Lemma C.11. We have ˆ b), which is possible only if (a b) = (a thus shown that N (φ z) contains a unique action profile for each (φ z) ∈ [0 1] × [0 ∞). Turning to Lipschitz continuity, fix K > 0, and let L > 0 and M > 0 designate the constants from Lemma C.11. Fix φ, φˆ ∈ [0 1] and z, zˆ ∈ [0 K], and let ˆ ∈ N (φ ˆ zˆ ). By the first-order conditions, ˆ b) (a b) ∈ N (φ z) and (a ˆ φ ˆ zˆ ) − F(a b φ z)) · (aˆ − a bˆ − b) ˆ b 0 ≤ (F(a ˆ φ ˆ zˆ ) − F(a b φ ˆ zˆ )) · (aˆ − a bˆ − b) ˆ b = (F(a ˆ zˆ ) − F(a b φ z)) · (aˆ − a bˆ − b) + (F(a b φ ≤ −M(|aˆ − a|2 + |bˆ − b|2 ) $ $ + L |φˆ − φ|2 + |ˆz − z|2 |aˆ − a|2 + |bˆ − b|2 where the last inequality follows from Lemma C.11 and the Cauchy–Schwarz inequality. Therefore, $ $ L 2 2 ˆ |aˆ − a| + |b − b| ≤ |φˆ − φ|2 + |ˆz − z|2 M
REPUTATION IN CONTINUOUS-TIME GAMES
845
and we have thus shown that N is Lipschitz continuous over [0 1] × [0 K]. It remains to show that g(N (φ 0)) is increasing in φ. Let (a(φ) b(φ)) designate the unique static Bayesian Nash equilibrium when the prior is φ. Since a and b are Lipschitz continuous, and hence absolutely continuous, it suffices d da db to show that 0 ≤ dφ g(a(φ) b(φ)) = g1 dφ + g2 dφ almost everywhere, which is db equivalent to showing that g2 dφ ≥ 0 a.e. by the first-order condition of the large player.39 Then, by condition (b), it is enough to show that d b(φ)/dφ has the same sign as h2 (a∗ b(φ) b(φ)) − h2 (a(φ) b(φ) b(φ)) almost everywhere. First, consider the case in which a(φ) and b(φ) are interior solutions, so the first-order conditions are satisfied with equality. By the implicit function theorem, −1 g12 g11 d a/dφ = d b/dφ (1 − φ)h12 φ(h∗22 + h∗23 ) + (1 − φ)(h22 + h23 ) 0 · −φ(h∗2 − h2 ) which implies g11 (1 − φ)h12
d b/dφ = g11 (1 − φ)h12
0 ∗ −φ(h2 − h2 ) g12 ∗ ∗ φ(h22 + h23 ) + (1 − φ)(h22 + h23 )
By condition (a), the denominator is positive and the numerator has the same d sign as h∗2 − h2 . Thus, dφ g(a(φ) b(φ)) ≥ 0 for almost every φ such that (a(φ) b(φ)) is in the interior of A × B. Also, for almost every φ, if either a(φ) or b(φ) is a corner solution, then either (i) b is constant in a neighborhood of φ, and hence g2 d b/dφ = 0 trivially, or (ii) the first-order condition for b holds with equality and a is constant in a neighborhood of φ, in which case differentiating the first-order condition of the small players (with d a/dφ = 0) yields the desired result. (a) and (c) ⇒ Condition 3. Condition 3(a) follows directly from μ1 > 0. Turning to Condition 3(b), for each (a φ) ∈ A × [0 1] let bBR (a φ) ∈ B designate the joint best reply of the small players to (a φ), that is, (53)
bBR (a φ) ∈ arg max φh(a∗ b bBR (a φ)) + (1 − φ)h(a b bBR (a φ)) b
To see why such bBR (a φ) must be unique, note that a necessary (and sufficient) condition for b to be a best reply of the small players to (a φ) is the 39 Indeed, if g1 (a(φ) b(φ)) = 0, then a must be constant in a neighborhood of φ, in which case (d a/dφ)(φ) = 0.
846
E. FAINGOLD AND Y. SANNIKOV
first-order condition def
G(a b φ) = φh2 (a∗ b b)
⎧ ⎨ ≤ 0 b = min B, + (1 − φ)h2 (a b b) = 0 b ∈ (min B max B), ⎩ ≥ 0 b = max B.
Since h22 + h23 < 0 along the diagonal of the small players’ actions, we have G2 < 0 and, hence, G(a b φ) is strictly decreasing in b for each (a φ). This implies that bBR (a φ) is unique for each (a φ). Next, note that for each (a b φ) ∈ A × B × [0 1], (54)
(G(a b∗ φ) − G(a b φ))(b∗ − b) ≤ −K1 |b∗ − b|2
where K1 = min(a b φ) |G2 (a b φ )| > 0. Moreover, G is C 1 and, hence, for each (a φ) ∈ A × [0 1], there is some a¯ ∈ A such that ¯ b φ)(a∗ − a) G(a∗ b φ) − G(a b φ) = G1 (a ¯ b b)(a∗ − a) = (1 − φ)h12 (a and so there is a constant K2 > 0 such that for each (a b φ) ∈ A × B × [0 1], (55)
|G(a∗ b φ) − G(a b φ)| ≤ K2 (1 − φ)|a∗ − a|
Noting that b∗ = bBR (a∗ φ) and letting b = bBR (a φ), the first-order conditions yield G(a∗ b∗ φ)(b − b∗ ) ≤ 0
and G(a b φ)(b∗ − b) ≤ 0
and, hence, (G(a∗ b∗ φ) − G(a b φ))(b∗ − b) ≥ 0 This inequality, together with (54) and (55), implies 0 ≤ (G(a∗ b∗ φ) − G(a b φ))(b∗ − b) = (G(a∗ b∗ φ) − G(a b∗ φ))(b∗ − b) + (G(a b∗ φ) − G(a b φ))(b∗ − b) ≤ K2 (1 − φ)|a∗ − a||b∗ − b| − K1 |b∗ − b|2 and, therefore, |b∗ − b| ≤ (K2 /K1 )(1 − φ)|a∗ − a|
REPUTATION IN CONTINUOUS-TIME GAMES
847
(a), (c), and (d) ⇒ Condition 1. Under assumptions (a) and (c), there is a unique b∗ ∈ B(a∗ ). Then assumption (d) implies that a∗ is not a best reply to b∗ . Moreover, under assumption (c), there is no a = a∗ with μ(a b∗ ) = μ(a∗ b∗ ); hence, Condition 1 follows. Q.E.D. APPENDIX D: APPENDIX FOR SECTION 7 Throughout Appendix D, we maintain Conditions 1 and 4. We write U and L to designate the upper and lower boundaries of the correspondence E , respectively, that is, def
U(φ) = sup E (φ)
def
L(φ) = inf E (φ) for all φ ∈ [0 1]
PROPOSITION D.1: The upper boundary U is a viscosity subsolution of the upper optimality equation on (0 1). PROOF: If U is not a subsolution, there exists q ∈ (0 1) and a C 2 function V : (0 1) → R such that 0 = (V − U ∗ )(q) < (V − U ∗ )(φ) for all φ ∈ (0 1) \ {q} and H∗ (q V (q) V (q)) = H∗ (q U ∗ (q) V (q)) > V (q) Since H∗ is lower semicontinuous, U ∗ is upper semicontinuous, and V > U ∗ on (0 1) \ {q}, there exist ε and δ > 0 small enough such that for all φ ∈ [q − ε q + ε], (56)
H(φ V (φ) − δ V (φ)) > V (φ)
(57)
V (q − ε) − δ > U ∗ (q − ε) ≥ U(q − ε)
and
V (q + ε) − δ > U ∗ (q + ε) ≥ U(q + ε) Figure 7 displays the configuration of functions U ∗ and V − δ. Fix a pair (φ0 W0 ) ∈ Graph E with φ0 ∈ (q − ε q + ε) and W0 > V (φ0 ) − δ. (Such (φ0 W0 ) exist because V (q) = U ∗ (q) and U ∗ is u.s.c.) Let (at b¯ t φt ) be a sequential equilibrium that attains the pair (φ0 W0 ). Denoting by (Wt ) the continuation value of the normal type, we have dWt = r(Wt − g(at b¯ t )) dt + rβt · (dXt − μ(at b¯ t ) dt) for some β ∈ L. Next, we will show that, with positive probability, eventually Wt becomes greater than U(φt ), leading to a contradiction since U is the upper boundary of E .
848
E. FAINGOLD AND Y. SANNIKOV
FIGURE 7.—A viscosity subsolution.
Let Dt = Wt − (V (φt ) − δ). By Itô’s formula, V (φt ) V (φt ) dt + γt V (φt ) dZtn − dV (φt ) = |γt |2 2 1 − φt where γt = γ(at b¯ t φt ), and, hence, dDt = rDt + r(V (φt ) − δ) − rg(at b¯ t ) − |γt |
2
V (φt ) V (φt ) − 2 1 − φt
dt
+ (rβt σ(b¯ t ) − γt V (φt )) dZtn Therefore, so long as Dt ≥ D0 /2, (a) φt cannot exit the interval [q − ε q + ε] by (57), and (b) there exists η > 0 such that either the drift of Dt is greater than rD0 /2 or the norm of the volatility of Dt is greater than η because of inequality (56) using an argument similar to the proof of Lemma C.8.40 This im40 While the required argument is similar to the one used in the proof of Lemma C.8, there are important differences, so we outline them here. First, the functions d and f from Lemma C.8 must be redefined, with the test function V replacing U in the new definition. Second, the de¯ φ β) finition of the compact set Φ also requires change: Φ would now be the set of all (a b with φ ∈ [q − ε q + ε] and |β| ≤ M such that the incentive constraints (42) are satisfied and ¯ φ) ≤ 0. Since φ is bounded away from 0 and 1, the boundary conditions will play no role d(a b here. Finally, note that U is assumed to satisfy the optimality equation in the lemma, while here V satisfies the strict inequality (56). Accordingly, we need to modify the last part of that proof as ¯ φ β) = 0 implies d(a b ¯ φ) > 0; therefore, we have η > 0. follows: f (a b
REPUTATION IN CONTINUOUS-TIME GAMES
849
plies that with positive probability, Dt stays above D0 /2 and eventually reaches any arbitrarily large level. Since payoffs are bounded, this leads to a contradiction. We conclude that U must be a subsolution of the upper optimality equation. Q.E.D. The next lemma is an auxiliary result used in the proof of Proposition D.2 below. LEMMA D.1: The correspondence E of public sequential equilibrium payoffs is convex-valued and has an arc-connected graph. ¯ w ∈ E (p) PROOF: First, we show that E is convex-valued. Fix p ∈ (0 1), w, ¯ Let us show that v ∈ E (p). Consider the¯ set V = with w¯ > w, and v ∈ (w w). {(φ w) | w¯ = α|φ − p|¯+ v}, where α > 0 is chosen large enough so that α|φ − p| + v > U(φ) for all φ sufficiently close to 0 and 1. Let (φt Wt )t≥0 be the belief–continuation value process of a public sequential equilibrium that yields def ¯ Let τ = inf{t > 0 | (φt Wt ) ∈ V}. By Theorem 8 the normal type a payoff of w. specialized to the case with a single behavioral type, we have limt→∞ φt = 0 with probability 1 − p and limt→∞ φt = 1 with probability p; hence, τ < ∞ almost surely.41 If with positive probability, φτ = p, then Wτ = v and, hence, v ∈ E (p), which is the desired result. For the case when φτ = p almost surely, the proof proceeds is two steps. In Step 1, we construct a pair of continuous curves C¯, C ⊂ Graph E |(01) such that their projection on the φ coordinate is the whole ¯ 1) and (0 inf{w | (p w) ∈ C¯} > v > sup{w | (p w) ∈ C } ¯ In Step 2, we use these curves to construct a sequential equilibrium for prior p under which the normal type receives a payoff of v, concluding the proof that E is convex-valued. Step 1. If φτ = p almost surely, then both φτ < p and φτ > p must happen with positive probability by the martingale property. Hence, there exists a continuous curve C ⊂ graph E with end points (p1 w1 ) and (p2 w2 ), with p1 < p < p2 , such that for all (φ w) ∈ C , we have w > v and φ ∈ (p1 p2 ). Fix 0 < ε < p − p1 . We will now construct a continuous curve C ⊂ graph E |(0p1 +ε] that has (p1 w1 ) as an end point and satisfies inf{φ | ∃w s.t. (φ w) ∈ C } = 0. Fix a public sequential equilibrium of the dynamic game with prior p1 that yields the normal type a payoff of w1 . Let Pn denote the probability measure over the sample paths of X induced by the strategy of the normal type. By 41
To be precise, Theorem 8 does not state what happens conditional on the behavioral type. However, in the particular case of a single behavioral type, it is easy to adapt the proof of that theorem (cf. Appendix E) to show that limt→∞ φt = 1 under the behavioral type.
850
E. FAINGOLD AND Y. SANNIKOV
Theorem 8, we have φt → 0 Pn -almost surely. Moreover, since (φt ) is a supermartingale under Pn , the maximal inequality for nonnegative supermartingales yields & % p1 > 0 Pn sup φt ≤ p1 + ε ≥ 1 − p1 + ε t≥0 Therefore, there exists a sample path (φ¯ t W¯ t )t≥0 with the property that φ¯ t ≤ p1 + ε < p for all t and φ¯ t → 0 as t → ∞. Thus, the curve C ⊂ graph E , defined as the image of the path t → (φ¯ t W¯ t ), has (p1 w1 ) as an end point and satisfies inf{φ | ∃w s.t. (φ w) ∈ C } = 0. Similarly, we can construct a continuous curve C ⊂ Graph E |[p2 −ε1) that has (p2 w2 ) as an end point and satisfies sup{φ | ∃w s.t. (φ w) ∈ C } = 1. def We have thus constructed a continuous curve C¯ = C ∪ C ∪ C ⊂ Graph E |(01) which projects onto (0 1) and satisfies inf{w | (p w) ∈ C¯} > v. A similar construction yields a continuous curve C ⊂ Graph E |(01) which projects onto (0 1) and satisfies sup{w | (p w) ∈ C } < v.¯ ¯ a sequential equilibrium for prior p under Step 2. We will now construct ¯ which the normal type receives a payoff of v. Let φ → (a(φ) b(φ)) be a measurable selection from the correspondence of static Bayesian Nash equilibrium. Define (φt )t≥0 as the unique weak solution of ¯ t ) φt )2 /(1 − φt ) + γ(a(φt ) b(φ ¯ t ) φt ) · dZ n dφt = −γ(a(φt ) b(φ t with initial condition φ0 = p.42 Next, let (Wt )t≥0 be the unique solution of the pathwise deterministic differential equation
¯ t )) dt dWt = r Wt − g(a(φt ) b(φ with initial condition W0 = v, up to the stopping time T > 0 when (φt Wt ) first hits either C¯ or C . Define the strategy profile (at b¯ t )t≥0 as follows: for t < T , ¯ ¯ t )); from t = T onward (at b¯ t ) follows a sequential set (at b¯ t ) = (a(φt ) b(φ equilibrium of the game with prior φT . By Theorem 2, (at b¯ t φt )t≥0 must be a sequential equilibrium that yields the normal type a payoff of v. We have thus shown that v ∈ E (p), concluding the proof that E is convex-valued. We now show that Graph(E ), is arc-connected. Fix p < q, v ∈ E (p), and w ∈ E (q). Consider a sequential equilibrium of the game with prior q that yields the normal type a payoff of w. Since φt → 0 under the normal type, there exists a continuous curve C ⊂ graph E with end points (q w) and (p v ) for ¯ t ) φt ) is bounded away from zero when Condition 1 ensures that γ(a(φt ) b(φ ¯ (a(φt ) b(φt )) ∈ N (φt 0), hence standard results for existence and uniqueness of weak solutions apply (Karatzas and Shreve (1991, p. 327)). 42
REPUTATION IN CONTINUOUS-TIME GAMES
851
some v ∈ E (p). Since E is convex-valued, the straight line C connecting (p v) to (p v ) is contained in the graph of E . Hence C ∪ C is a continuous curve connecting (q w) and (p v) which is contained in the graph of E , concluding the proof that E has an arc-connected graph. Q.E.D. PROPOSITION D.2: The upper boundary U is a viscosity supersolution of the upper optimality equation on (0 1). PROOF: If U is not a supersolution, there exists q ∈ (0 1) and a C 2 function V : (0 1) → R such that 0 = (U∗ − V )(q) < (U∗ − V )(φ) for all φ ∈ (0 1) \ {q}, and H ∗ (q V (q) V (q)) = H ∗ (q U∗ (q) V (q)) < V (q) Since H ∗ is upper semicontinuous, U∗ is lower semicontinuous, and U∗ > V on (0 1) \ {q}, there exist ε, δ > 0 small enough such that for all φ ∈ [q − ε q + ε], (58)
H(φ V (φ) + δ V (φ)) < V (φ)
(59)
V (q − ε) + δ < U(q − ε)
and
V (q + ε) + δ < U(q + ε)
Figure 8 displays the configuration of functions U∗ and V + δ. Fix a pair (φ0 W0 ) with φ0 ∈ (q − ε q + ε) and U(φ0 ) < W0 < V (φ0 ) + δ. We will now construct a sequential equilibrium that attains (φ0 W0 ), and this will lead to a contradiction since U(φ0 ) < W0 and U is the upper boundary of E .
FIGURE 8.—A viscosity supersolution.
852
E. FAINGOLD AND Y. SANNIKOV
¯ Let φ → (a(φ) b(φ)) ∈ N (φ φ(1 − φ)V (φ)/r) be a measurable selection of action profiles that minimize V (φ) 2 V (φ) ¯ ¯ (60) − rV (φ) − rg(a b) − |γ(a b φ)| 2 1−φ ¯ ∈ N (φ φ(1 − φ)V (φ)/r) for each φ ∈ (0 1). Define (φt )t≥0 as over all (a b) the unique weak solution of (61)
¯ t ) φt )2 /(1−φt ) dt +γ(a(φt ) b(φ ¯ t ) φt )·dZ n dφt = −γ(a(φt ) b(φ t
on the interval [q − ε q + ε], with initial condition φ0 .43 Next, let (Wt )t≥0 be the unique strong solution of
¯ t )) dt + γ(a(φt ) b(φ ¯ t ) φt )V (φt ) · dZ n dWt = r Wt − g(a(φt ) b(φ (62) t with initial condition W0 , until the stopping time when φt first exits [q − ε q + ε].44 Then, by Itô’s formula, the process Dt = Wt − V (φt ) − δ has zero volatility and drift given by ¯ t )) rDt + rV (φt ) − rg(a(φt ) b(φ 2 V (φt ) V (φt ) ¯ − − γ(a(φt ) b(φt )) φt ) 2 1 − φt ¯ By (58) and the definition of (a(φ) b(φ)), the drift of Dt is strictly negative as long as Dt ≤ 0 and φt ∈ [q − ε q + ε]. (Note that D0 < 0.) Therefore, the process (φt Wt )t≥0 remains under the curve (φt V (φt ) + δ) from time zero onward as long as φt ∈ [q − ε q + ε]. By Lemma D.1, there exists a continuous curve C ⊂ E |[q−εq+ε] that connects the points (q − ε U(q − ε)) and (q + ε U(q + ε)). By (59), the path C and the function V + δ bound a connected region in [q − ε q + ε] × R that contains (φ0 W0 ), as shown in Figure 8. Since the drift of Dt is strictly negative while φt ∈ [q − ε q + ε], the pair (φt Wt ) eventually hits the path C at a stopping time τ < ∞ before φt exits the interval [q − ε q + ε]. We will now construct a sequential equilibrium for prior φ0 under which the normal type receives a payoff of W0 . Consider the strategy profile and be¯ t ) φt ) up to time τ and follow a lief process that coincide with (a(φt ) b(φ sequential equilibrium of the game with prior φτ at all times after τ. Since 43 Existence and uniqueness of a weak solution on a closed subinterval of (0 1) follow from the fact that V must be bounded on such a subinterval and, therefore, γ must be bounded away from zero by Lemma B.1. See Karatzas and Shreve (1991, p. 327). 44 Existence and uniqueness of a strong solution follow from the Lipschitz and linear growth ¯ conditions in W , and the boundedness of γ(a(φ) b(φ) φ)V (φ) on [q − ε q + ε].
REPUTATION IN CONTINUOUS-TIME GAMES
853
¯ t )) ∈ N (φt φt (1 − φt )V (φt )/r), and the processes Wt is bounded, (a(φt ) b(φ (φt )t≥0 and (Wt )t≥0 follow (61) and (62), respectively, Theorem 2 implies that ¯ t ))t≥0 and the belief process (φt )t≥0 form a sethe strategy profile (a(φt ) b(φ quential equilibrium of the game with prior φ0 . It follows that W0 ∈ E (φ0 ), and this is a contradiction since W0 > U(φ0 ). Thus, U must be a supersolution of the upper optimality equation. Q.E.D. LEMMA D.2: Every bounded viscosity solution of the upper optimality equation is locally Lipschitz continuous. PROOF: En route to a contradiction, suppose that U is a bounded viscosity solution that is not locally Lipschitz. That is, for some p ∈ (0 1) and ε ∈ (0 12 ) satisfying [p − 2ε p + 2ε] ⊂ (0 1), the restriction of U to [p − ε p + ε] is not Lipschitz continuous. Let M = sup |U|. By Corollary B.1, there exists K > 0 such that for all (φ u u ) ∈ [p − 2ε p + 2ε] × [−M M] × R, (63)
|H ∗ (φ u u )| ≤ K(1 + |u |2 )
Since the restriction of U to [p − ε p + ε] is not Lipschitz continuous, there exist φ0 , φ1 ∈ [p − ε p + ε] such that |U∗ (φ1 ) − U∗ (φ0 )| (64) ≥ max 1 exp 2M 4K + 1/ε |φ1 − φ0 | Hereafter we assume that φ1 > φ0 and U∗ (φ1 ) > U∗ (φ0 ). The proof for the reciprocal case is similar and will be omitted for brevity. Let V : J → R be the solution of the differential equation (65)
V (φ) = 2K(1 + V (φ)2 )
with initial conditions given by (66)
V (φ1 ) = U∗ (φ1 )
and
V (φ1 ) =
U∗ (φ1 ) − U∗ (φ0 ) φ1 − φ0
where J is the maximal interval of existence around φ1 . We claim that V has the following two properties: (a) There exists φ∗ ∈ J ∩ (p − 2ε p + 2ε) such that V (φ∗ ) = −M and φ∗ < φ0 . In particular, φ0 ∈ J. (b) V (φ0 ) > U∗ (φ0 ). We first prove property (a). For all φ ∈ J such that V (φ) > 1, we have V (φ) < 4KV (φ)2 or, equivalently, (log V ) (φ) < 4KV (φ), which implies (67)
ˆ − V (φ) ˜ > 1 log(V (φ)) ˆ − log(V (φ)) ˜ ˆ φ˜ ∈ J V (φ) ∀φ 4K ˆ > V (φ) ˜ > 1 such that V (φ)
854
E. FAINGOLD AND Y. SANNIKOV
By (64) and (66), we have exists such that (68)
1 4K
log(V (φ1 )) > 2M and, therefore, a unique φ˜ ∈ J
1 ˜ = 2M log(V (φ1 )) − log(V (φ)) 4K
˜ > 1, it follows from (67) that V (φ1 ) − V (φ) ˜ > 2M and so V (φ) ˜ < Since V (φ) ∗ ˜ −M. Since V (φ1 ) > U(φ0 ) ≥ −M, there exists some φ ∈ (φ φ1 ) such that V (φ∗ ) = −M. Moreover, φ∗ must belong to (p − 2ε p + 2ε), because the strict convexity of V implies φ 1 − φ∗ < =
2M 2M V (φ1 ) − V (φ∗ ) < < ∗ ˆ ˆ V (φ ) V (φ) log(V (φ)) 2M < ε log(V (φ1 )) − 8KM
where the equality follows from (68) and the rightmost inequality follows from (64). Finally, we have φ∗ < φ0 ; otherwise, the inequality V (φ∗ ) ≤ U∗ (φ0 ) and the initial conditions (66) would imply V (φ1 ) =
U∗ (φ1 ) − U∗ (φ0 ) V (φ1 ) − V (φ∗ ) ≤ φ1 − φ0 φ 1 − φ∗
which would violate the strict convexity of V . This concludes the proof of property (a). Turning to property (b), the strict convexity of V and the initial conditions (66) imply U∗ (φ1 ) − V (φ0 ) V (φ1 ) − V (φ0 ) = < V (φ1 ) φ1 − φ0 φ 1 − φ0 =
U∗ (φ1 ) − U∗ (φ0 ) φ1 − φ0
and, therefore, V (φ0 ) > U∗ (φ0 ), as claimed. Now define L = max{V (φ) − U∗ (φ) | φ ∈ [φ∗ φ1 ]}. By property (b), we must have L > 0. Let φˆ be a point at which the maximum L is attained. Since V (φ∗ ) = −M and V (φ1 ) = U∗ (φ1 ), we must have φˆ ∈ (φ∗ φ1 ) and, therefore, V − L is a test function satisfying ˆ = V (φ) ˆ − L and U∗ (φ) U∗ (φ) ≥ V (φ) − L
for each φ ∈ (φ∗ φ1 )
Since U is a viscosity supersolution, ˆ ≤ H ∗ (φ ˆ V (φ) ˆ − L V (φ)); ˆ V (φ)
REPUTATION IN CONTINUOUS-TIME GAMES
855
hence, by (63), ˆ ≤ K(1 + V (φ) ˆ 2 ) < 2K(1 + V (φ) ˆ 2 ) V (φ) and this is a contradiction, since by construction V satisfies equation (65). Q.E.D. LEMMA D.3: Every bounded viscosity solution of the upper optimality equation is continuously differentiable with absolutely continuous derivatives. PROOF: Let U : (0 1) → R be a bounded solution of the upper optimality equation. By Lemma D.2, U is locally Lipschitz and hence differentiable almost everywhere. We will now show that U is differentiable everywhere. Fix φ ∈ (0 1). Since U is locally Lipschitz, there exist δ > 0 and k > 0 such that for every p ∈ (φ − δ φ + δ) and every smooth test function V : (φ − δ φ + δ) → R satisfying V (p) = U(p) and V ≥ U, we have |V (p)| ≤ k It follows from Corollary B.1 that there exists some M > 0 such that H(p U(p) V (p)) ≤ M for every p ∈ (φ − δ φ + δ) and every smooth test function V satisfying V ≥ U and V (p) = U(p). Let us now show that for all ε ∈ (0 δ) and ε ∈ (0 ε), ε − ε ε (69) U(φ + ε) + U(φ) −Mε (ε − ε ) < U(φ + ε ) − ε ε < Mε (ε − ε ) If not, for example if the second inequality fails, then we can choose K > 0 such that the C 2 function (a parabola) ε − ε ε ε → f (φ + ε ) = U(φ + ε) + U(φ) + Mε (ε − ε ) + K ε ε is strictly above U(φ + ε ) over (0 ε), except for a tangency point at some ε ∈ (0 ε). But this contradicts the fact that U is a viscosity subsolution, since f (φ + ε ) = −2M < H(φ + ε U(p + ε ) U (φ + ε )). This contradictions proves inequalities (69). It follows from (69) that for all 0 < ε < ε < δ, U(φ + ε ) − U(φ) U(φ + ε) − U(φ) ≤ Mε − ε ε
856
E. FAINGOLD AND Y. SANNIKOV
Thus, as ε converges to 0 from above, (U(φ + ε) − U(φ))/ε converges to a limit U (φ+). Similarly, if ε converges to 0 from below, the quotient above converges to a limit U (φ−). We claim that U (φ+) = U (φ−). Otherwise, if for example U (φ+) > U (φ−) then the function ε → f1 (φ + ε ) = U(φ) + ε
U (φ−) + U (φ+) + Mε2 2
is below U in a neighborhood of φ except for a tangency point at φ. But this leads to a contradiction, because f1 (φ) = 2M > H(φ U(φ) U (φ)) and U is a supersolution. Therefore, U (φ+) = U (φ−) and we conclude that U is differentiable at every φ ∈ (0 1). It remains to show that U is locally Lipschitz. Fix φ ∈ (0 1) and, arguing just as above, choose δ > 0 and M > 0 so that H(p U(p) V (p)) ≤ M for every p ∈ (φ − δ φ + δ) and every smooth test function V satisfying V (p) = U(p) and either V ≥ U or V ≤ U. We affirm that for any p ∈ (φ − δ φ + δ) and ε ∈ (0 δ), |U (p) − U (p + ε)| ≤ 2Mε Otherwise, for example, if U (p + ε) > U (p) + 2Mε for some p ∈ (φ − δ φ + δ) and ε ∈ (0 δ), then the test function ε → f2 (p + ε ) =
ε − ε ε U(p + ε) + U(p) − Mε (ε − ε ) ε ε
must be above U at some ε ∈ (0 ε) (since f2 (p + ε) − f2 (p) = 2Mε). Therefore, there exists a constant K > 0 such that f2 (p + ε ) − K stays below U for ε ∈ [0 ε] except for a tangency at some ε ∈ (0 ε). But then f2 (φ + ε ) = 2M > H(φ + ε U(φ + ε ) U (φ + ε )) contradicting the assumption that U is a viscosity supersolution.
Q.E.D.
PROPOSITION D.3: The upper boundary U is a continuously differentiable function with absolutely continuous derivatives. Moreover, U is the greatest bounded solution of the differential inclusion (70) U (φ) ∈ H(φ U(φ) U (φ)) H ∗ (φ U(φ) U (φ)) a.e. PROOF: First, by Propositions D.1 and D.2, and Lemma D.3, the upper boundary U is a differentiable function with absolutely continuous derivative that solves the differential inclusion (70). If U is not the greatest bounded solution of (70), then there exists another bounded solution V which is strictly
REPUTATION IN CONTINUOUS-TIME GAMES
857
greater than U at some p ∈ (0 1). Choose ε > 0 such that V (p) − ε > U(p). We will show that V (p) − ε is the payoff of a public sequential equilibrium, which is a contradiction since U is the upper boundary. From the inequality V (φ) ≥ H(φ V (φ) V (φ)) a.e. ¯ it follows that a measurable selection φ → (a(φ) b(φ)) ∈ N (φ φ(1 − φ)V (φ)/r) exists such that (71)
¯ rV (φ) − rg(a(φ) b(φ) φ) 2 V (φ) V (φ) ¯ − ≤0 − γ(a(φ) b(φ) φ) 2 1−φ
for almost every φ ∈ (0 1). Let (φt ) be the unique weak solution of ¯ t ) φt )2 /(1 − φt ) + γ(a(φt ) b(φ ¯ t ) φt ) dZ n dφt = −γ(a(φt ) b(φ t with initial condition φ0 = p. Let (Wt ) be the unique strong solution of
¯ t )) dt + V (φt )γ(a(φt ) b(φ ¯ t ) φt ) dZ n dWt = r Wt − g(a(φt ) b(φ t with initial condition W0 = V (p) − ε. Consider the process Dt = Wt − V (φt ). It follows from Itô’s formula for differentiable functions with absolutely continuous derivatives that dDt ¯ t ) φt ) = rDt + rV (φt ) − rg(a(φt ) b(φ dt 2 V (φt ) V (φt ) ¯ − γ(a(φt ) b(φt ) φt ) − 2 1 − φt Therefore, by (71), we have dDt ≤ rDt dt and since D0 = −ε < 0, it follows that Wt −∞. Let τ be the first time that (φt Wt ) hits the graph of U. Consider a strategy profile/belief process that coincides with (at b¯ t φt ) up to time τ and after that follows a public sequential equilibrium of the game with prior φτ with value U(φτ ). Theorem 2 implies that the strategy profile/belief process thus constructed is a sequential equilibrium that yields the large player payoff V (p) − ε > U(p), a contradiction. Q.E.D.
858
E. FAINGOLD AND Y. SANNIKOV
APPENDIX E: PROOF OF THEOREM 8 Consider the closed set ¯ φ β) ∈ A × Δ(B) × ΔK × Rd : IC = {(a b conditions (11) and (12) are satisfied} Then the image of this set under
¯ φ β) → φ0 φ0 σ(b) ¯ −1 (μ(a b) ¯ − μφ (a b)) ¯ rβ σ(b) ¯ (a b is a closed set that does not intersect the line segment (0 1) × {0} × {0} by Condition 1 . Thus, for any ε > 0, there exist constants C > 0 and M > 0 such ¯ φ β) ∈ IC with φ0 ∈ [ε 1 − ε], either that for all (a b ¯ −1 (μ(a b) ¯ − μφ (a b)) ¯ ≥ C or |rβ σ(b)| ¯ ≥ M φ0 σ(b) Fix a public sequential equilibrium (at b¯ t φt )t≥0 with continuation values (Wt )t≥0 for the normal type and consider the evolution of exp(K1 (Wt − g)) + K2 φ20t ¯ while φ0t ∈ [ε 1 − ε], where the constants K1 and K2 > 0 will be determined later. By Itô’s formula, under the probability measure generated by the normal type, the process exp(K1 (Wt − g)) has drift ¯ K1 exp(K1 (Wt − g))r(Wt − g(at b¯ t )) ¯ + K12 exp(K1 (Wt − g))r 2 |β t σ(b¯ t )|2 /2 ¯ Thus, we can guarantee that the drift of exp(K1 (Wt − g)) is greater than or equal to 1 whenever |rβ t σ(b¯ t )| ≥ M by choosing K1 > 0¯such that −K1 r(g¯ − g) + K12 M 2 /2 ≥ 1 ¯ Moreover, the drift of K2 φ20t must always be nonnegative, since under the normal type, φ0t is a submartingale and, hence, K2 φ20t is also a submartingale. Now, even when |rβ t σ(b¯ t )| < M, the drift of exp(K1 (Wt − g)) is still greater ¯ than or equal to −K1 exp(K1 (g¯ − g))r(g¯ − g) ¯ ¯ But in this case we have |φ0t σ(b¯t )−1 (μ(at b¯ t ) − μφ (at b¯ t ))| ≥ C, so the drift of K2 φ20t is greater than or equal to K2 C 2 . Thus, by choosing K2 large enough so that K2 C 2 − K1 exp(K1 (g¯ − g))r(g¯ − g) ≥ 1 ¯ ¯
REPUTATION IN CONTINUOUS-TIME GAMES
859
we can ensure that the drift of exp(K1 (Wt − g)) + K2 φ20t is always greater than 1 while φ0t ∈ [ε 1 − ε]. But since exp(K1 (W¯t − g)) + K2 φ20t must be bounded ¯ eventually exit the interval in a sequential equilibrium, it follows that φ0t must [ε 1 − ε] with probability 1 in any sequential equilibrium. Since ε > 0 is arbitrary, it follows that the bounded submartingale (φ0t )t≥0 must converge to 0 or 1 almost surely, and it cannot converge to 0 with positive probability under the probability measure generated by the normal type. Q.E.D. APPENDIX F: APPENDIX FOR SECTION 9 We begin with the following monotonicity lemma, which will be used throughout this appendix. LEMMA F.1: Fix (φ ζ), (φ ζ ) ∈ [0 1] × R, (a b) ∈ M(φ ζ), and (a b ) ∈ M(φ ζ ). If φ ≤ φ, ζ ≥ ζ, and ζ ≥ 0, then a ≥ a. PROOF: The proof is given in three steps: Step 1: The best reply bBR of the small players, which is single-valued and defined by the fixed-point condition bBR (a φ) = arg max φh(a∗ b bBR (a φ)) b
+ (1 − φ)h(a∗ b bBR (a φ)) is increasing in a and φ. For each (a φ) ∈ A × [0 1], any pure action b ∈ B which is a best reply for the small players to (a φ) must satisfy the first-order condition (φh2 (a∗ b b) + (1 − φ)h2 (a b b))(bˆ − b) ≤ 0 ∀bˆ ∈ B Since h22 + h23 < 0 along the diagonal of the small players’ actions, for each fixed (a φ), the function b → φh2 (a∗ b b) + (1 − φ)h2 (a b b) is strictly decreasing; hence, the best reply of the small players is unique for all (a φ), as claimed. To see that bBR is increasing, let a ≥ a, φ ≥ φ and suppose, toward a contradiction, that b = bBR (a φ ) < bBR (a φ) = b. By the first-order conditions, (φh2 (a∗ b b) + (1 − φ)h2 (a b b))(b − b) ≤ 0 <0
and (φ h2 (a∗ b b ) + (1 − φ )h2 (a b b ))(b − b) ≤ 0; >0
860
E. FAINGOLD AND Y. SANNIKOV
hence, φh2 (a∗ b b) + (1 − φ)h2 (a b b) ≥ 0 ≥ φ h2 (a∗ b b ) + (1 − φ )h2 (a b b ) which is a contradiction since we have assumed h12 ≥ 0 and h22 + h23 < 0 along the diagonal of small players’ actions. def Step 2: For each (b ζ) ∈ B × R, define BR(b ζ) = arg maxa g(a b) + ζλ(a ).45 Then, for all (a b ζ) and (a b ζ ) ∈ A × B × R with a ∈ BR(b ζ) and a ∈ BR(b ζ ), [b ≤ b ζ ≥ ζ and ζ ≥ 0]
⇒
a ≥ a
If a ∈ BR(b ζ) and a ∈ BR(b ζ ), then the first-order conditions imply (g1 (a b) + ζλ (a))(a − a) ≤ 0 and (g1 (a b ) + ζ λ (a ))(a − a ) ≤ 0 Toward a contradiction, suppose b ≤ b, ζ ≥ max{0 ζ}, and a < a. Then the inequalities above imply g1 (a b) + ζλ (a) ≥ 0 ≥ g1 (a b ) + ζ λ (a ) But this is a contradiction, since we have g1 (a b ) > g1 (a b) by g11 < 0 and g12 ≤ 0, and also ζ λ (a ) ≥ ζ λ (a) ≥ ζλ (a) by λ ≤ 0, λ > 0, and ζ ≥ 0. Step 3: If (a b) ∈ M(φ ζ), (a b ) ∈ M(φ ζ ), φ ≤ φ, and ζ ≥ max{0 ζ}, then a ≥ a. Suppose not, that is, (a b) ∈ M(φ ζ), (a b ) ∈ M(φ ζ ), φ ≤ φ, ζ ≥ max{0 ζ}, and a < a. Then we must have b ≤ b by Step 1 and b > b Q.E.D. by Step 2, a contradiction. Therefore, a ≥ a, as claimed. We are now ready to prove Lemma 2, the continuity lemma used in the proof of Theorem 10. PROOF OF LEMMA 2: Fix an arbitrary constant M > 0. Consider the set Φ of all tuples (a b ζ) ∈ A × B × R that satisfy (72)
a ∈ arg max g(a b) + ζλ(a ) a ∈A
b ∈ arg max h(a b b) b ∈B
g(a b) + ζλ(a) ≥ g(aN bN ) + ε and ζ ≤ M. We claim that for all (a b ζ) ∈ Φ, we have ζ > 0. Otherwise, if ζ ≤ 0 for some (a b ζ) ∈ Φ, then a ≤ aN by Lemma F.1 and, therefore, b ≤ bN , since 45
Note that BR may not be single-valued when ζ < 0.
REPUTATION IN CONTINUOUS-TIME GAMES
861
the small players’ best reply is increasing in the large player’s action, as shown in Step 1 of the proof of Lemma F.1. But since (aN bN ) is a Nash equilibrium and g2 ≥ 0, it follows that g(a b) + ζλ(a) ≤ g(a b) ≤ g(a bN ) ≤ g(aN bN ) < g(aN bN ) + ε, and this is a contradiction since we have assumed that (a b ζ) satisfy (72). Thus, Φ is a compact set and the continuous function (a b ζ) → ζ achieves its minimum, ζ0 , on Φ. Moreover, we must have ζ0 > 0 by the argument in the def previous paragraph. It follows that ζ ≥ δ = min{M ζ0 } for any (a b ζ) that satisfy conditions (72). Q.E.D. Turning to the proof of Proposition 9, since it is is similar to the proof of Proposition 4, we only provide a sketch. PROOF OF PROPOSITION 9—SKETCH: As in Appendix C.6, we use a firstorder condition to characterize the action profile (a b) ∈ M(φ (V (φ + φ(a)) − V (φ))/r) for each (φ V ) ∈ (0 1) × C inc ([0 1]). To express this condition, we define a function G : A × B × [0 1] × C inc ([0 1]) → R2 as
def (73) G(a b φ V ) = g1 (a b) + λ (a) V (φ + φ(a)) − V (φ) /r φh2 (a∗ b b) + (1 − φ)h2 (a b b) Thus, the first-order necessary and sufficient condition for (a b) ∈ M(φ (V (φ + φ(a)) − V (φ))/r) can be stated as G(a b φ V ) · (aˆ − a bˆ − b) ≤ 0
ˆ ∈ A × B ˆ b) ∀(a
Next, using an argument very similar to the proof of Lemma C.11, we can show the following conditions: (A) ∃L > 0 such that ∀(a b φ) ∈ A × B ∈ (0 1) and V , Vˆ ∈ C inc ([0 1]), |G(a b φ Vˆ ) − G(a b φ V )| ≤ Ld∞ (Vˆ V ) where d∞ is the supremum distance on C inc ([0 1]). ˆ ∈ ˆ b) (B) ∃M > 0 such that ∀(φ V ) ∈ (0 1) × C inc ([0 1]) and (a b), (a A × B, ˆ φ V ) − G(a b φ V )) · (aˆ − a bˆ − b) ˆ b (G(a ≤ −M(|aˆ − a|2 + |bˆ − b|2 ) Finally, with conditions (A) and (B) above in place, we can follow the steps of the proof of Proposition 4 (with the mapping G replacing F ) and prove the desired result. Q.E.D. Next, toward the proof of Theorem 11, we first show that the optimality equation has a bounded increasing solution.
862
E. FAINGOLD AND Y. SANNIKOV
PROPOSITION F.1: There exists a bounded increasing continuous function U : (0 1) → R which solves the optimality equation (36) on (0 1). The proof of Proposition F.1 relies on a series of lemmas. LEMMA F.2: For every ε > 0 and every compact set K ⊂ C inc ([ε 1]), there exists c > 0 such that for all (φ V ) ∈ [ε 1 − ε] × K, φ(a(φ V )) ≥ c PROOF: By Proposition 9, the function (φ V ) → φ(a(φ V )) is continuous on the domain [ε 1 − ε] × C inc ([ε 1]) and, hence, it achieves its minimum on the compact set [ε 1 − ε] × K at some point (φ0 V0 ). Thus, letting def c = φ0 (a(φ0 V0 )) yields the desired result. Indeed, we must have c ≥ 0 since λ is increasing. Moreover, c could be zero only if a(φ0 V0 ) = a∗ , in which case V0 (φ0 ) = 0 and hence a∗ would have to be part of a static Nash equilibrium of the complete information game, which is ruled out by assumption (c). Q.E.D. For ease of notation, for each continuous and bounded function V : (0 1) → R and φ ∈ (0 1), let H(φ V ) denote the right-hand side of the optimality equation, that is, def
H(φ V ) =
rg(a(φ V ) b(φ V )) + λ(a(φ V ))V (φ) − rV (φ) λφ (a(φ V ))φ(a(φ V ))
¯ and ε > 0, consider the following initial value probNext, for each α ∈ [g g] ¯ modified version of the optimality equation: lem (IVP) for a suitably PROBLEM—IVP(ε α): Find a real-valued, continuous function U defined on an interval [φα 1], with φα < 1 − ε, such that46 (74)
U (φ) = max{0 H(φ U)} ∀φ ∈ [φα 1 − ε) U(φ) = α
∀φ ∈ [1 − ε 1]
With this definition in place we can state the next lemma: ¯ a unique solution of the IVP(ε α) LEMMA F.3: For every ε > 0 and α ∈ [g g], ¯ exists on an interval [φα 1], with φα < 1¯− ε. Moreover, if for some α0 ∈ [g g], ¯ a unique solution of the IVP(ε α0 ) exists on [φ0 1], then for every α in a neighborhood of α0 , a unique solution Uεα of the IVP(ε α) exists on the same interval [φ0 1] and α → Uεα is a continuous function under the supremum metric. 46 Here our convention is that U (φ) is the left derivative of U at φ. But since the right-hand side is continuous in φ, a solution can fail to be differentiable only at φ = 1 − ε, at which point the left derivative can be different from the right derivative, which is identically zero given the initial condition.
REPUTATION IN CONTINUOUS-TIME GAMES
863
PROOF: We will apply Theorems 2.2 and 2.3 from Hale and Lunel (1993, pp. 43–44), which provide sufficient conditions under which an initial value problem for a retarded functional differential equation locally admits a unique solution, which is continuous in initial conditions. By Proposition 9 and the fact that φ(a(φ V )) > 0 for all (φ V ) ∈ [ε 1 − ε] × C inc ([ε 1]), the function max{0 H} is continuous on [ε 1 − ε] × C inc ([ε 1]). Moreover, by Proposition 9 and Lemma F.2, for every compact set K ⊂ C inc ([ε 1]) and every φ ∈ [ε 1 − ε], the function V → max{0 H(φ V )} is Lipschitz continuous on K with a Lipschitz constant which is uniform in φ ∈ [ε 1 − ε]. We have thus shown that max{0 H}—the right-hand side of the retarded functional differential equation (74)—satisfies all the conditions of Theorems 2.2 and 2.3 from Hale and Lunel (1993, pp. 43–44), and, therefore, the IVP(ε α) admits a local solution which is unique and continuous in the initial condition α. Q.E.D. Next, we show that for a suitable choice of the initial condition α, the unique solution to the IVP(ε α) exists on the entire interval [ε 1], takes values in the ¯ and solves the optimality equation on [ε 1 − ε]. set [g g], ¯ LEMMA F.4: For every ε > 0 there exists a continuous increasing function ¯ that solves the optimality equation (36) on [ε 1 − ε] and is U : [ε 1] → [g g] constant on [1¯ − ε 1]. ¯ write Uα to designate the unique PROOF: Fix ε > 0. For each α ∈ [g g], ¯ 1] be its maximal interval of existence. solution of the IVP(ε α), and let Iα ⊂ (0 Consider the disjoint sets ε ¯ : Uα (φ) ≤ g for some φ ∈ Iα ∩ J1 = α ∈ [g g] 1 − ε 2 ¯ ¯ and
⎧ ⎫ ε ⎪ ⎪ ⎪ ⎪ I 1 ⊃ ⎪ ⎪ α ⎪ ⎪ ⎪ ⎪ 2 ⎪ ⎪ ⎨ ⎬ ε ¯ : Uα (φ) > g for all φ ∈ 1 J2 = α ∈ [g g] ⎪ 2 ⎪ ⎪ ⎪ ¯ ¯ ⎪ ⎪ ⎪ ⎪ ε ⎪ ⎪ ⎪ H(φ Uα ) < 0 for some φ ∈ 1 − ε ⎪ ⎩ ⎭ 2
¯ Since J1 We will show that J1 and J2 are nonempty open sets relative to [g g]. ¯ \ (J1 ∪ J2 ). and J2 are disjoint, we will conclude that there exists some α ∈ [g¯ g] ¯ ( ε 1], takes For this α, the solution to the IVP(ε α), Uα , is well defined on 2 ¯ and satisfies H(φ Uα ) ≥ 0 for all φ ∈ ( ε2 1 − ε]. The latter values in [g g], implies that¯ Uα solves the optimality equation on ( ε2 1 − ε].
864
E. FAINGOLD AND Y. SANNIKOV
¯ Clearly, J2 is open in Let us show that J1 and J2 are open relative to [g g]. ¯ ¯ because α → Uα and α → H(· Uα ) are continuous [g g] functions by Propo¯ ¯ because for each α ∈ J1 we sition 9 and Lemma F.3. Also J1 is open in [g g], ¯ and, therefore, have Uα (φ0 ) ≤ g for some φ0 ∈ Iα ∩ (ε 1 − ε], ¯ >0
H(φ0 Uα ) = r g(a(φ0 Uα ) b(φ0 Uα )) − Uα (φ0 ) ≥0
+ λ(a(φ0 Uα )) Uα (φ0 )
/ λφ (a(φ0 Uα )) φ(a(φ0 Uα )) > 0 >0
which implies Uα (φ0 ) > 0 by (74). Hence, there exists φ1 ∈ ( ε2 φ0 ) with Uα (φ1 ) < g, and so, by the continuity of α˜ → Uα˜ , we must have α˜ ∈ J1 for ¯ all α˜ in a neighborhood of α, as was to be shown. It remains to show that J1 and J2 are nonempty sets. It is clear that g ∈ J1 . As for J2 , note that the constant function U¯ = g¯ solves the equa¯ ¯ To see this, observe that U¯ = 0 identically tion with initial condition g. ¯ ¯ ¯ < 0; hence, U¯ (φ) = 0 = ¯ so H(φ U) and that g(a(φ U) b(φ U)) < g, ¯ max{0 H(φ U)}. Therefore, g¯ ∈ J2 . Q.E.D. LEMMA F.5: For every V ∈ C inc ([0 1]) and every φ and φ ∈ (0 1) with φ < φ, φ (a(φ V )) < φ(a(φ V )) + φ − φ PROOF: Fix V ∈ C inc ([0 1]) and φ, φ ∈ (0 1) with φ < φ. Suppose, toward a contradiction, that φ + φ (a(φ V )) ≥ φ + φ(a(φ V )). Since λ is strictly ˆ a) ˆ a) ˆ → φˆ + φ( ˆ is strictly increasing in φˆ and increasing, the function (φ ˆ and, therefore, we must have a(φ V ) < a(φ V ). It strictly decreasing in a, follows from Lemma F.1 that V (φ ) < V (φ). But since V is increasing, we have V (φ ) ≤ V (φ) and V (φ + φ (a(φ V ))) ≥ V (φ + φ(a(φ V ))), and, therefore, ΔV (φ ) ≥ ΔV (φ), a contradiction. Q.E.D. LEMMA F.6: For every ε > 0, there exists K > 0 such that for every continuous, ¯ and every φ, φ ∈ [ε 1 − ε] with φ < φ, if increasing function V : [ε 1] → [g g] ¯ a(φ V ) = min A and a(φ V ) = min A, then rg(a(φ V ) b(φ V )) + λ(a(φ V ))V (φ ) ≥ rg(a(φ V ) b(φ V )) + λ(a(φ V ))V (φ) − K(φ − φ ) PROOF: Let ε > 0 be fixed. The proof proceeds in four steps:
REPUTATION IN CONTINUOUS-TIME GAMES
865
Step 1: There exists a constant Ka > 0 such that for all V ∈ C inc ([0 1]) and all φ, φ ∈ [ε 1 − ε] with φ < φ, a(φ V ) ≥ a(φ V ) − Ka (φ − φ )
By the definition of φ(a) and Lemma F.5, for all ε ≤ φ < φ ≤ 1 − ε, λ(a(φ V )) − λ(a(φ V ))
=
λφ (a(φ V ))φ (a(φ V )) λφ (a(φ V ))φ(a(φ V )) − φ (1 − φ ) φ(1 − φ) ≤λ¯
=
≤φ−φ by Lemma F5 φ(1 − φ)λφ (a(φ V ))(φ (a(φ V )) − φ(a(φ V ))) φ (1 − φ )φ(1 − φ) ≥ε4 ≤λ¯
≤φ−φ
λφ (a(φ V ))φ(a(φ V ))(φ(1 − φ) − φ (1 − φ )) + φ (1 − φ )φ(1 − φ) ≥ε4 ≤λ(a(φ V ))−λ(a(φV ))
≤1 φ(1 − φ)φ(a(φ V ))(λφ (a(φ V )) − λφ (a(φ V ))) + φ (1 − φ )φ(1 − φ)
¯ −4 (φ − φ ) − ε ≤ 2λε
−4
≥ε4
λ(a(φ V )) − λ(a(φ V ))
def where λ¯ = maxa∈A λ(a). Hence,
¯ + ε4 )−1 (φ − φ ) λ(a(φ V )) − λ(a(φ V )) ≤ 2λ(1 and, therefore, by the mean value theorem, a(φ V ) − a(φ V ) ≤
¯ 2λ(φ − φ ) mina λ (a)(1 + ε4 ) def
¯
2λ which proves the desired inequality with Ka = mina λ (a)(1+ε 4) . Step 2: There exists a constant Kb > 0 such that for all V ∈ C inc ([0 1]) and all φ φ ∈ [ε 1 − ε] with φ < φ,
b(φ V ) ≥ b(φ V ) − Kb (φ − φ )
866
E. FAINGOLD AND Y. SANNIKOV
First, note that the best reply of the small players, (a φ) → bBR (a φ), is increasing and Lipschitz continuous.47 Therefore, there exist constants c1 and c2 > 0 such that for all φ, φ ∈ (0 1) with φ ≤ φ and all a, a ∈ A with a ≤ a, 0 ≤ bBR (a φ) − bBR (a φ ) ≤ c1 (a − a ) + c2 (φ − φ ) Hence, for all V ∈ C inc ([0 1]) and all φ, φ ∈ [ε 1 − ε] with φ < φ, if a(φ V ) − a(φ V ) ≥ 0, then b(φ V ) − b(φ V ) ≤ c1 (a(φ V ) − a(φ V )) + c2 (φ − φ )
≤ c1 Ka (φ − φ ) + c2 (φ − φ ) = (c1 Ka + c2 )(φ − φ ) where Ka is the constant from Step 1. Moreover, if a(φ V ) − a(φ V ) < 0, then b(φ V ) − b(φ V ) ≤ bBR (a(φ V ) φ) − bBR (a(φ V ) φ )
≤ c2 (φ − φ ) ≤ (c1 Ka + c2 )(φ − φ ) def
which thus proves the desired inequality with Kb = c1 Ka + c2 . Step 3: For each (a b) ∈ A × B and each ζ ≥ 0, if a = arg maxa g(a b) + ζλ(a ) ∈ (min A max A), then ζ = −g1 (a b)/λ (a): This follows directly from the first-order condition when a ∈ (min A max A). Step 4: There exists K > 0 such that for every φ and φ ∈ [ε 1 − ε] with φ < φ and every V ∈ C inc ([ε 1]), if (a b) = (a(φ V ) b(φ V )) with a = min A, and (a b ) = (a(φ V ) b(φ V )) with a = min A, then rg(a b) + λ(a)V (φ) ≥ rg(a b ) + λ(a )V (φ ) − K(φ − φ ) First note that a = a∗ = max A; otherwise, we would have V (φ) = 0 and, therefore, (a∗ b) would be a static Nash equilibrium of the complete information game, which is ruled out by condition (d). Likewise, a = a∗ . Since by assumption we also have a = min A and a = min A, we can apply Step 3 to conclude that V (φ)/r = ζ(a b) and
V (φ )/r = ζ(a b )
where the function ζ : A × B → (0 ∞) is such that ˆ = −g1 (a ˆ ˆ b) ˆ b)/λ ˆ ζ(a (a)
ˆ ∈ A × B ˆ b) ∀(a
47 The monotonicity of bBR is Step 2 from the proof of Lemma F.1. The Lipschitz continuity is a straightforward implication of the first-order condition and the facts that h12 (a b b) is bounded from above and (h22 + h23 )(a b b) is bounded away from zero.
867
REPUTATION IN CONTINUOUS-TIME GAMES
so it satisfies ζ1 > 0 and ζ2 ≥ 0, since g11 < 0, g12 ≤ 0, λ > 0, and λ ≤ 0. Since (a b ) ∈ M(φ ζ(a b )), we have g(a b ) + λ(a)ζ(a b ) ≤ g(a b ) + λ(a )ζ(a b ) and, therefore, g(a b) + λ(a)ζ(a b) − (g(a b ) + λ(a )ζ(a b )) = g(a b) + λ(a)ζ(a b) − (g(a b ) + λ(a)ζ(a b )) + g(a b ) + λ(a)ζ(a b ) − (g(a b ) + λ(a )ζ(a b )) ≤ g(a b) + λ(a)ζ(a b) − (g(a b ) + λ(a)ζ(a b )) = g(a b) + λ(a)ζ(a b) − (g(a b ) + λ(a)ζ(a b )) + λ(a)(ζ(a b ) − ζ(a b )) ¯ we have ¯ b), Hence, for some (a g(a b) + λ(a)ζ(a b) − (g(a b ) + λ(a )ζ(a b )) ¯ + λ(a)ζ2 (a b))(b ¯ ¯ b )(a − a ) − b ) + λ(a)ζ1 (a ≤ (g2 (a b) ≤ c3 (b − b ) + c4 (a − a ) ˆ + λ(a)ζ ˆ > 0 and c4 = max ˆ λ(a)ζ ˆ > ˆ b) ˆ 2 (a ˆ b) ˆ 1 (a ˆ b) where c3 = max(aˆ b)ˆ g2 (a ˆ b) (a 0. It follows from Steps 1 and 2 that def
def
g(a b) + λ(a)ζ(a b) ≤ g(a b ) + λ(a )ζ(a b ) + (c3 Kb + c4 Ka )(φ − φ ) def
which is the desired result with K = r(c3 Kb + c4 Ka ).
Q.E.D.
LEMMA F.7: For every ε > 0 there exists R > 0 such that for every continuous, ¯ that solves the optimality equation (36) on increasing function U : [ε 1] → [g g] ¯ [ε 1 − ε] and is constant on [1 − ε 1], |U (φ)| ≤ R ∀φ ∈ [ε 1 − ε] PROOF: Fix ε > 0. We begin with the definition of the upper bound R. Since / M(φ 0) for all (b φ) ∈ B × [0 1], there exists 0 < η < a∗ − min A (a∗ b) ∈ such that for all (a b φ ζ) ∈ A × B × [0 1] × [0 g¯ − g] with (a b) ∈ M(φ ζ), ¯ (75) a ≥ a∗ − η ⇒ ζ ≥ η
868
E. FAINGOLD AND Y. SANNIKOV def
Let K > 0 designate the constant from Lemma F.6 and define λ = mina λ(a). λη ¯ a ∈ A, Next choose 0 < δ < 2K ¯ small enough that for all φ ∈ [ε 1 − ε] and (76)
φ(a) ≤ δ
⇒
a ≥ a∗ − η
def ¯ g¯ − g)/λ and choose R > 2C/δ large enough that Let C = (r + λ)( ¯ ¯ C λη (77) > g¯ − g ¯ ¯ log δ − log R 2λ ¯
Now suppose the thesis of the lemma were false, that is, maxφ∈[ε1−ε] U (φ) > ¯ which solves R for some continuous, increasing function U : [ε 1] → [g g] the optimality equation on [ε 1 − ε] and is constant on [1¯ − ε 1]. Let φ0 ∈ arg maxφ∈[ε1−ε] U (φ). By the optimality equation, R < U (φ0 ) ≤
¯ g¯ − g) (r + λ)( C ¯ = λφ0 (a(φ0 U)) φ0 (a(φ0 U)) ¯
which yields (78)
φ0 (a(φ0 U)) <
C δ < ; R 2
hence a(φ0 U) > a∗ − η by (76), which implies (79)
U(φ0 ) ≥ η
by (75). To conclude the proof, it is enough to show that (80)
U (φ) ≥
λη ¯ C + φ0 − φ 2λ¯ R
∀φ ∈ [φ0 − δ φ0 ]
implies
φ0
λη dφ ¯ C φ0 −δ ¯ + φ0 − φ 2λ R C λη C + δ − log > g¯ − g = ¯ log ¯ R R 2λ ¯
U(φ0 ) − U(φ0 − δ) ≥
¯ by (77), and this is a contradiction since U takes values in [g g]. ¯
REPUTATION IN CONTINUOUS-TIME GAMES
869
To prove inequality (80), first recall that U solves the optimality equation on [ε 1 − ε], that is, (81)
rg(a(φ U) b(φ U)) + λ(a(φ U))U(φ) − rU(φ) λφ (a(φ U))φ(a(φ U)) ∀φ ∈ [ε 1 − ε]
U (φ) =
Hence, by Lemmas F.5 and F.6 and inequality (78), for all φ ∈ [φ0 − δ φ0 ], ≥λφ0 (a(φ0 U))U (φ0 ) by (81)
¯
U (φ) ≥ rg(a(φ0 U) b(φ0 U)) + λ(a(φ0 U))U(φ0 ) − rU(φ0 ) ≤Kδ<λη/2
¯ − K(φ0 − φ)
/ λ¯ · φ0 (a(φ0 U)) +φ0 − φ ≤C/R
λη λU (φ0 )φ0 (a(φ0 U)) − ¯ 2 ≥ ¯ C + φ0 − φ λ¯ R Thus, to prove inequality (80) and conclude the proof of the lemma, it suffices to show that U (φ0 )φ0 (a(φ0 U)) ≥ η Indeed, by the mean value theorem and the fact that U is constant on [1 − ε 1], ¯ ¯ we must have U(φ0 ) ≤ U (φ)φ 0 (a(φ0 U)) for some φ ∈ [φ0 1−ε]. Hence, U(φ0 ) ≤ U (φ0 )φ0 (a(φ0 U)) since U (φ0 ) = max{U (φ) : φ ∈ [ε 1 − ε]}, Q.E.D. and, therefore, U (φ0 )φ0 (a(φ0 U))) ≥ η follows from (79). We are now ready to prove Proposition F.1. PROOF OF PROPOSITION F.1: Our method of proof is to construct a solution ¯ as a limit of a sequence of solutions on expanding closed U : (0 1) → [g g] subintervals of¯ (0 1). Using Lemma F.4, for each n ≥ 1 there exists a continu¯ that solves the optimality equation ous, increasing function Un : [ n1 1] → [g g] 1 1 ¯ 1 − ]. Since for m ≥ n the restriction of Um to [ n1 1] solves the option [ n n mality equation on [ n1 1 − n1 ], by Lemma F.7, the derivative of Um is uniformly bounded for m ≥ n, and so the sequence (Um )m≥n is uniformly bounded and equicontinuous over the domain [ n1 1 − n1 ]. By the Arzelà–Ascoli theorem, for every n there exists a subsequence of (Um )m≥n that converges uniformly on
870
E. FAINGOLD AND Y. SANNIKOV
[ n1 1 − n1 ]. Hence, by a standard diagonalization argument, we can find a subsequence (Unk )k≥1 that converges pointwise to a continuous, increasing func¯ such that the convergence is uniform on every compact tion U : (0 1) → [g g] subset of (0 1). ¯ It remains to show that U solves the equation on (0 1). If we show that Un k (φ) → H(φ U) uniformly on any closed subinterval [φ0 φ1 ] ⊂ (0 1), it will then follow that U is differentiable and U (φ) = H(φ U). First note that from any φ ∈ [φ0 φ1 ], the posterior cannot jump above φ2 = φ1 + φ1 (1 − ¯ λ λ− φ1 ) λ ¯ . Since Proposition 9 and Lemma F.2 imply that V → H(φ V ) is ¯ Lipschitz continuous on the compact set {U|[φ0 φ2 ]} ∪ {Unk |[φ0 φ2 ] : k ≥ 1} with a Lipschitz constant that is uniform in φ ∈ [φ0 φ1 ] and since the sequence (Unk )k≥1 converges to U uniformly on [φ0 φ2 ], it follows that Un k (φ) = H(φ Unk ) converges to H(φ U) uniformly on [φ0 φ1 ], as required. Q.E.D. The following lemma concerns boundary conditions. LEMMA F.8: Every bounded increasing solution of the optimality equation U : (0 1) → R satisfies the following conditions at p ∈ {0 1}: lim U(φ) = g(M(p 0)) and
φ→p
lim φ(1 − φ)U (φ) = 0
φ→p
PROOF: Let us show that limφ→0 U(φ) = g(M(0 0)). First, by continuity, as φ → 0, we have U(φ) = U(φ + φ(a(φ U))) − U(φ) → 0 and, hence, (a(φ U) b(φ U)) → M(0 0) = (aN bN ). Therefore, if limφ→0 U(φ) < g(aN bN ), then the optimality equation implies r g(aN bN ) − lim U(φ) φ→0 > 0 lim φU (φ) = ∗ φ→0 λ(a ) − λ(aN ) Thus, for 0 < c < r(g(aN bN ) − limφ→0 U(φ))/(λ(a∗ ) − λ(aN )), we must have U (φ) > c/φ for all φ sufficiently close to 0. But then U cannot be bounded, since the antiderivative of c/φ, which is c log φ, tends to −∞ as φ → 0, and this is a contradiction. Thus, we must have limφ→0 U(φ) ≥ g(aN bN ). Moreover, if limφ→0 U(φ) > g(aN bN ), then a similar argument can be used to show that limφ→0 φU (φ) < 0, which is impossible since U is increasing. We have thus shown that limφ→0 U(φ) = g(M(0 0)). The proof for the φ → 1 case is analogous. Finally, the condition limφ→p φ(1 − φ)U (φ) = 0 follows from the optimality equation and the boundary condition limφ→p U(φ) = g(M(p 0)). Q.E.D. PROPOSITION F.2: The optimality equation (36) has a unique bounded increasing solution on (0 1).
REPUTATION IN CONTINUOUS-TIME GAMES
871
PROOF: By Proposition F.1, the optimality equation has at least one bounded increasing solution. Suppose U and V are two such solutions. Assuming that U(φ) > V (φ) for some φ ∈ (0 1), let φ0 ∈ (0 1) be the point where the difference U − V is maximized, which is well defined because limφ→p U(φ) = limφ→p V (φ) for p ∈ {0 1} by Lemma F.8. Thus, we have U(φ0 ) − V (φ0 ) > 0 and U (φ0 ) − V (φ0 ) = 0. Let
def U(φ0 ) = U φ0 + φ0 (a(φ0 U)) − U(φ0 )
def V (φ0 ) = V φ0 + φ0 (a(φ0 V )) − V (φ0 ) We claim that U(φ0 ) > V (φ0 ). Otherwise, if U(φ0 ) ≤ V (φ0 ), then
a(φ0 U) ≤ a(φ0 V ) by Lemma F.1 and, hence, b(φ0 U) ≤ b(φ0 V ) by Step 1
of the proof of Lemma F.1. Therefore, rg(a(φ0 U) b(φ0 U)) + λ(a(φ0 U))U(φ0 ) ≤ rg(a(φ0 U) b(φ0 V )) + λ(a(φ0 U))V (φ0 ) ≤ rg(a(φ0 V ) b(φ0 V )) + λ(a(φ0 V ))V (φ0 ) where the first inequality uses g2 ≥ 0 and the second inequality follows from the fact that (a(φ0 V ) b(φ0 V )) ∈ M(φ0 V (φ0 )/r). Then the optimality equation implies U (φ0 ) =
rg(a(φ0 U) b(φ0 U)) + λ(a(φ0 U))U(φ0 ) − rU(φ0 ) φ0 (1 − φ0 )(λ(a∗ ) − λ(a(φ0 U)))
<
rg(a(φ0 V ) b(φ0 V )) + λ(a(φ0 V ))V (φ0 ) − rV (φ0 ) φ0 (1 − φ0 )(λ(a∗ ) − λ(a(φ0 V )))
= V (φ0 ) which is a contradiction. Thus, U(φ0 ) > V (φ0 ), as claimed. It follows from U(φ0 ) > V (φ0 ) that a(φ0 U) ≥ a(φ0 V ), by Lemma F.1. Hence, φ0 (a(φ0 U)) ≤ φ0 (a(φ0 V )) and since V is increasing,
U φ0 + φ0 (a(φ0 U)) − V φ0 + φ0 (a(φ0 U))
≥ U φ0 + φ0 (a(φ0 U)) − V φ0 + φ0 (a(φ0 V )) > U(φ0 ) − V (φ0 ) where the last inequality follows from U(φ0 ) > V (φ0 ). But this is a contradiction, for we picked φ0 to be the point where the difference U − V is maximized. Thus, the optimality equation has a unique bounded increasing solution. Q.E.D.
872
E. FAINGOLD AND Y. SANNIKOV
We are now ready to prove Theorem 11. PROOF OF THEOREM 11: By Proposition F.2, the optimality equation has a unique bounded solution U : (0 1) → R. By Lemma F.8, such solution satisfies the boundary conditions. The proof proceeds in two steps. In Step 1 we show that, given any prior φ0 ∈ (0 1) on the behavioral type, there is no sequential equilibrium that yields a payoff different from U(φ0 ) to the nordef mal type. In Step 2 we show that the Markovian strategy profile (amt bmt ) = (a(φt− U) b(φt− U)) is the unique sequential equilibrium that yields the payoff U(φ0 ) to the normal type. Step 1. Fix a prior φ0 ∈ (0 1) and a sequential equilibrium (at b¯ t φt )t≥0 . By Theorem 9, the belief process (φt )t≥0 solves dφt = −λφt− (at )φt− (at ) dt + φt− (at ) dNt and the process (Wt )t≥0 of continuation values of the normal type solves (82)
dWt = r(Wt− − g(at b¯ t ) − ζt λ(at )) dt + rζt dNt
for some predictable process (ζt )t≥0 such that (83)
(at b¯ t ) ∈ M(φt− ζt ) almost everywhere def
Thus, the process Ut = U(φt ) satisfies (84)
dUt = −λφt− (at )φt− (at )U (φt− ) dt + Ut dNt
where def
Ut = U(φt− + φt− (at )) − U(φt− ) def
We will now demonstrate that D0 = W0 − U(φ0 ) = 0 by showing that if D0 = def 0, then the process Dt = Wt − U(φt ) must eventually grow arbitrarily large with positive probability, which is a contradiction since U and W are both bounded. Without loss of generality, we assume D0 > 0. It follows from (82), (84), and the optimality equation (36) that the process (Dt )t≥0 jumps by rζt − Ut when a Poisson event arrives and has drift given by rDt− + f (amt bmt Utm φt− ) − f (at b¯ t ζt φt− ) where def
Utm = U(φt− + φt− (amt )) − U(φt− )
REPUTATION IN CONTINUOUS-TIME GAMES
873
and for each (a b ζ φ) ∈ A × B × [0 g¯ − g] × [0 1], ¯ def
f (a b ζ φ) = rg(a b) + rζλ(a) + φ(1 − φ)U (φ)λ(a) CLAIM 5: There exists ε > 0 such that, so long as Dt ≥ D0 /2, either the drift of Dt is greater than or equal to rD0 /4 or Dt jumps up by more than ε upon the arrival of a Poisson event. To prove this claim, note that by Lemma F.1, if rζt − ΔUtm ≤ 0, then amt ≥ at and, hence, bmt ≥ bt since the small players’ best reply in M is increasing in the large player’s action, as shown in Step 1 of the proof of Lemma F.1. Therefore, rζt − Utm ≤ 0 implies f (amt bmt Utm φt− ) − f (at b¯ t ζt φt− ) m ≥rg(at bm t )+Ut λ(at )
= rg(amt bmt ) + Utm λ(amt ) − rg(at b¯ t ) − rζt λ(at ) + φt− (1 − φt− )U (φt− )(λ(amt ) − λ(at )) ≥0 by λ >0
≥ rg(at bmt ) − rg(at b¯ t ) + (Utm − rζt )λ(at ) ≥0 by g2 ≥0
≥0
Thus, rζt − Utm ≤ 0 ⇒
f (amt bmt Utm φt− ) − f (at b¯ t ζt φt− ) ≥ 0
By a continuity/compactness argument similar to the one used in the proof of Lemma C.8, there exists ε > 0 such that for all t and after all public histories, rζt − Utm ≤ ε ⇒
f (amt bmt Utm φt− ) − f (at b¯ t ζt φt− ) ≥ −rD0 /4;
hence, so long as Dt ≥ D0 /2, then rζt − Utm ≤ ε ⇒
drift of Dt is greater than or equal to rD0 /4
Thus, so long as Dt ≥ D0 /2, if the drift of Dt is less than rD0 /4, then rζt − Utm > ε, which implies that at ≥ amt by Lemma F.1 and, hence, that Ut ≤
874
E. FAINGOLD AND Y. SANNIKOV
Utm since U is increasing; therefore, rζt − Ut , the jump in Dt upon the arrival of a Poisson event, must be greater than or equal to rζt − Utm > ε, as claimed. Since we have assumed that D0 > 0, the claim above readily implies that with positive probability, the process Dt = Wt − U(φt ) must eventually grow arbitrarily large, which is impossible since (Wt )t≥0 and (U(φt ))t≥0 are bounded processes. We have thus shown that no sequential equilibrium can yield a payoff greater than U(φ0 ). A similar argument proves that payoffs below U(φ0 ) also cannot be achieved in sequential equilibria. Step 2. First, let us show that the Markovian strategy profile (a(φt U) b(φt U))t≥0 is a sequential equilibrium profile that yields the normal type a payoff of U(φ0 ). Let (φt )t≥0 be the solution of dφt = −λφt− (a(φt− U))φt− (a(φt− U)) dt + φt− (a(φt− U)) dNt with initial condition φ0 . Thus, using the optimality equation, m dUtm = (rUt− − rg(amt bmt ) − λ(amt )Utm ) dt + Utm dNt
def
def
def
where (amt bmt ) = (a(φt− U) b(φt− U)), Utm = U(φt ), and Utm = U(φt− + φt− (amt )) − U(φt− ). Since (Utm )t≥0 is bounded and (amt bmt ) ∈ M(φt− Utm ), Theorem 9 implies that (amt bmt )t≥0 is a sequential equilibrium profile in which the normal type receives a payoff of U(φ0 ). It remains to show that (a(φt U) b(φt U))t≥0 is the unique sequential equilibrium. Indeed, if (at b¯ t )t≥0 is an arbitrary sequential equilibrium, then the associated belief–continuation value pair (φt Wt ) must stay in the graph of U, by Step 1. Then, by (82) and (84), we must have (at b¯ t ) ∈ M(φt− ζt ) a.e., where rζt = U(φt− + φt− (at )) − U(φt− ). Therefore, by Proposition 9, (at b¯ t ) = (a(φt− U) b(φt− U)) a.e., as was to be shown. Q.E.D. REFERENCES ABREU, D., P. MILGROM, AND D. PEARCE (1991): “Information and Timing in Repeated Partnerships,” Econometrica, 59, 1713–1733. [776,777,793,820,821,823,824,826,827] ABREU, D., D. PEARCE, AND E. STACCHETTI (1990): “Toward a Theory of Discounted Repeated Games With Imperfect Monitoring,” Econometrica, 58, 1041–1063. [775] AUBIN, J. P., AND A. CELLINA (1984): Differential Inclusions. Berlin: Springer. [777] BARRO, R. J. (1986): “Reputation in a Model of Monetary Policy With Incomplete Information,” Journal of Monetary Economics, 17, 3–20. [773] BILLINGSLEY, P. (1999): Convergence of Probability Measures (Second Ed.). New York: Wiley. [794] BOARD, S., AND M. MEYER-TER-VEHN (2010): “Reputation for Quality,” Report, UCLA. [821] BOLTON, P., AND C. HARRIS (1999): “Strategic Experimentation,” Econometrica, 67, 349–374. [787] BRÉMAUD, P. (1981): Point Processes and Queues: Martingale Dynamics (First Ed.). New York: Springer-Verlag. [821,822] CELENTANI, M., AND W. PESENDORFER (1996): “Reputation in Dynamic Games,” Journal of Economic Theory, 70, 109–132. [773]
REPUTATION IN CONTINUOUS-TIME GAMES
875
CHARI, V. V., AND P. J. KEHOE (1993), “Sustainable Plans and Debt,” Journal of Economic Theory, 61, 230–261. [773] COLE, H. L., J. DOW, AND W. B. ENGLISH (1995): “Default, Settlement, and Signalling: Lending Resumption in a Reputational Model of Sovereign Debt,” International Economic Review, 36, 365–385. [773] CRANDALL, M. G., H. ISHII, AND P.-L. LIONS (1992): “User’s Guide to Viscosity Solutions of Second Order Differential Equations,” Bulletin of the American Mathematical Society, 27, 1–67. [811,812] CRIPPS, M., G. J. MAILATH, AND L. SAMUELSON (2004): “Imperfect Monitoring and Impermanent Reputations,” Econometrica, 72, 407–432. [774,777,785,797,806,817,819,820] CUKIERMAN, A., AND A. MELTZER (1986): “A Theory of Ambiguity, Credibility and Inflation Under Discretion and Asymmetric Information,” Econometrica, 54, 1099–1128. [773,804] DE COSTER, C., AND P. HABETS (2006): Two-Point Boundary Value Problems: Lower and Upper Solutions. Mathematics in Science and Engineering, Vol. 205 (First Ed.). Amsterdam: Elsevier. [830] DIAMOND, D. W. (1989): “Reputation Acquisition in Debt Markets,” Journal of Political Economy, 97, 828–862. [773] ELY, J. C., AND J. VALIMAKI (2003): “Bad Reputation,” Quarterly Journal of Economics, 118, 785–814. [773,814,816] ELY, J. C., D. FUDENBERG, AND D. K. LEVINE (2008): “When Is Reputation Bad?” Games and Economic Behavior, 63, 498–526. [814] FAINGOLD, E. (2008): “Building a Reputation Under Frequent Decisions,” Report, Yale University. [777,779,784,785,802,806,827] FAINGOLD, E., AND Y. SANNIKOV (2011): “Supplement to ‘Reputation in ContinuousTime Games’: Public Randomization,” Econometrica Supplemental Material, 79, http://www. econometricsociety.org/ecta/Supmat/7377_extensions.pdf. [817] FUDENBERG, D., AND D. K. LEVINE (1992): “Maintaining a Reputation When Strategies Are Imperfectly Observed,” Review of Economic Studies, 59, 561–579. [774,776,777,779,784-786,802, 806] (1994): “Efficiency and Observability With Long-Run and Short-Run Players,” Journal of Economic Theory, 62, 103–135. [776,779,784,793] (2007): “Continuous-Time Models of Repeated Games With Imperfect Public Monitoring,” Review of Economic Dynamics, 10, 173–192. [776,777,793,794,827] (2009): “Repeated Games With Frequent Signals,” Quarterly Journal of Economics, 124, 233–265. [777,794,827] FUDENBERG, D., D. K. LEVINE, AND E. MASKIN (1994): “The Folk Theorem With Imperfect Public Information,” Econometrica, 62, 997–1039. [821] HALE, J., AND S. M. V. LUNEL (1993): Introduction to Functional Diffential Equations (First Ed.). New York: Springer-Verlag. [863] HARRIS, C., P. RENY, AND A. ROBSON (1995): “The Existence of Subgame-Perfect Equilibrium in Continuous Games With Almost Perfect Information: A Case for Public Randomization,” Econometrica, 63, 507–544. [816,817] HOLMSTROM, B., AND P. MILGROM (1991): “Multitask Principal–Agent Analyses: Incentive Contracts, Ownership and Job Design,” Journal of Law, Economics and Organization, 7, 24–52. [802] KARATZAS, I., AND S. E. SHREVE (1991): Brownian Motion and Stochastic Calculus (Second Ed.). New York: Springer. [781,850,852] KELLER, G., AND S. RADY (1999): “Optimal Experimentation in a Changing Environment,” Review of Economic Studies, 66, 475–507. [787] KLEIN, B., AND K. B. LEFFLER (1981): “The Role of Market Forces in Assuring Contractual Performance,” Journal of Political Economy, 89, 615–641. [773] KREPS, D., AND R. WILSON (1982): “Reputation and Imperfect Information,” Journal of Economic Theory, 27, 253–279. [773,774]
876
E. FAINGOLD AND Y. SANNIKOV
KYDLAND, F., AND E. PRESCOTT (1977): “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” Journal of Political Economy, 85, 473–492. [805] LIPTSER, R. S., AND A. N. SHIRYAEV (1977): Statistics of Random Processes: General Theory, Vol. I. New York: Springer-Verlag. [787] LIU, Q., AND A. SKRZYPACZ (2010), “Limited Records and Reputation,” Report, University of Pennsylvania. [809] MAILATH, G. J., AND L. SAMUELSON (2006): Repeated Games and Reputations: Long-Run Relationships (First Ed.). Berlin: Oxford University Press. [801] MASKIN, E., AND J. TIROLE (2001): “Markov Perfect Equilibrium: I. Observable Actions,” Journal of Economic Theory, 100, 191–219. [775] MILGROM, P., AND J. ROBERTS (1982): “Predation, Reputation and Entry Deterrence,” Journal of Economic Theory, 27 280–312. [773,774] MOSCARINI, G., AND L. SMITH (2001): “The Optimal Level of Experimentation,” Econometrica, 69, 1629–1644. [787] SANNIKOV, Y. (2007): “Games With Imperfectly Observable Actions in Continuous Time,” Econometrica, 75, 1285–1329. [775,786] (2008): “A Continuous-Time Version of the Principal–Agent Problem,” Review of Economic Studies, 75, 957–984. [775] SANNIKOV, Y., AND SKRZYPACZ, A. (2007): “Impossibility of Collusion Under Imperfect Monitoring With Flexible Production,” American Economic Review, 97, 1794–1823. [776,793,801, 827] (2010): “The Role of Information in Repeated Games With Frequent Actions,” Econometrica, 78, 847–882. [776,777,821,827]
Dept. of Economics, Yale University, Box 208281, New Haven, CT 06520-8281, U.S.A.;
[email protected] and Dept. of Economics, Princeton University, Princeton, NJ 08544-1021, U.S.A.;
[email protected]. Manuscript received August, 2007; final revision received October, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 877–892
WEAKLY BELIEF-FREE EQUILIBRIA IN REPEATED GAMES WITH PRIVATE MONITORING BY KANDORI, MICHIHIRO1 Repeated games with imperfect private monitoring have a wide range of applications, but a complete characterization of all equilibria in this class of games has yet to be obtained. The existing literature has identified a relatively tractable subset of equilibria. The present paper introduces the notion of weakly belief-free equilibria for repeated games with imperfect private monitoring. This is a tractable class which subsumes, as a special case, a major part of the existing literature (the belief-free equilibria). It is shown that this class can outperform the equilibria identified by the previous work. KEYWORDS: Repeated games, private monitoring, belief-free equilibrium, recursive structure.
1. INTRODUCTION THE PRESENT PAPER demonstrates a new way to construct equilibria in repeated games with imperfect private monitoring, which can outperform the equilibria identified by the previous literature. Specifically, I generalize the notion of belief-free equilibria (Ely and Valimaki (2002) and Ely, Horner, and Olszewski (2005), EHO hereafter), which has played a major role in the existing literature, and show that the resulting weakly belief-free equilibria continue to possess a nice recursive structure. I then apply this concept to a repeated prisoner’s dilemma game with private monitoring and construct a simple equilibrium which outperforms the equilibria identified by previous work. The superior performance is due to the fact that the equilibrium partially embodies the essential mechanism to achieve efficiency in repeated games with imperfect monitoring (the transfer of continuation payoffs across players, as in Fudenberg, Levine, and Maskin (1994)). In addition, the equilibrium is in very simple pure strategies and it is robust in the sense that players’ actions are always strict best replies. This is in contrast to belief-free equilibria, which rely on judiciously chosen mixed strategies and provide only weak incentive to follow the equilibrium actions. A repeated game is said to have (imperfect) private monitoring if agents’ actions are not directly observable and each agent receives imperfect private information (a private signal) about the opponents’ actions. This class of games has a number of important potential applications, but a complete characterization of equilibrium payoffs has yet to be obtained. This stands in sharp contrast to the case where players share the same information (repeated games with perfect or imperfect public monitoring), where the set of equilibria is 1 I am grateful to a co-editor and anonymous referees for helpful comments. I also thank Arthur J. Chiang for detailed comments and editing assistance. This research was partially supported by MEXT of the Japanese government, Grant-in-Aid for Scientific Research (c) 21530165.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8480
878
KANDORI, MICHIHIRO
fully characterized and efficient outcomes can be sustained under a mild set of conditions (the folk theorems). The main difficulty in the private monitoring case comes from the fact that each player has to draw statistical inferences about the history of the opponents’ private signals. The inferences quickly become complicated over time, even if players adopt relatively simple strategies. To deal with this difficulty, the majority of existing literature has focused on belief-free equilibria, where players do not have to draw any statistical inferences. Let us denote player i’s action and private signal in period t by ai (t) and ωi (t). Note that, in general, player i’s continuation strategy at time t + 1 is determined by his private history hti = (ai (1) ωi (1) ai (t) ωi (t)). The belief-free equilibrium has the property that player i’s continuation strategy is a best reply to the opponents’ continuation strategies for any realization of opponents’ histories, ht−i = (a−i (1) ω−i (1) a−i (t) ω−i (t)), thereby making player i’s belief over ht−i irrelevant. The core of this approach was provided by the influential works of Piccione (2002), Obara (1999), and Ely and Valimaki (2002). This idea was later substantially generalized by Matsushima (2004), EHO, Horner and Olszewski (2006), and Yamamoto (2007). EHO showed that the set of belief-free equilibria can be characterized by a simple recursive method similar to that of Abreu, Pearce, and Stacchetti (1990). In the present paper, I propose a weakening of the belief-free conditions, leading to a set of equilibria which are still tractable and are capable of sustaining a larger payoff set. Note that the belief-free conditions imply that, at the beginning of period t + 1, player i does not have to form beliefs over ht−i = (a−i (1) ω−i (1) a−i (t) ω−i (t)). In contrast, I require that player i does not need to form beliefs over (a−i (1) ω−i (1) a−i (t)), omitting the last piece of information ω−i (t) from the belief-free requirement. This says that player i does not have to know the opponents’ histories up to the previous actions. However, player i does need to understand correctly that, for each possible action profile a(t), the private signals in the previous period are distributed according to the given monitoring structure p(ω(t)|a(t)).2 I call equilibria with this property weakly belief-free equilibria. I show that weakly belief-free equilibria have a recursive structure, and to this end I introduce the notion of reduced games. A reduced game payoff is equal to a current payoff (for an arbitrary action profile) plus the future equilibrium continuation payoff. When players use one-period memory strategies, current actions fully specify the continuation strategies, so that the reduced game payoff to player i at time t is represented as a simple function uti (a), where a is the action profile at time t. In this case, the weakly belief-free equilibria can be characterized by a simple property that players always play a correlated equilibrium of the reduced game after any history. In general, when 2 p(ω(t)|a(t)) represents the joint distribution of private signals (ω1 (t) ωN (t)) = ω(t) given action profile (a1 (t) aN (t)) = a(t).
EQUILIBRIA IN REPEATED GAMES
879
strategies do not necessarily have one-period memories, players’ continuation strategies depend on the past history as well as the current action. Let θi be a state variable which summarizes player i’s private history. In the general case, a reduced game payoff to player i is represented as vit (a|θ1 θN ), and the weakly belief-free equilibria are characterized by the property that players always play a Bayesian correlated equilibrium of the reduced game. 2. THE MODEL Let Ai be the (finite) set of actions of the stage game for player i = 1 N and define A = A1 × · · · × AN . Each player i observes her own action ai and private signal ωi ∈ Ωi . We denote ω = (ω1 ωN ) ∈ Ω = Ω1 × · · · × ΩN and let p(ω|a) be the probability of private signal profile ω given action profile a (we assume that Ω is a finite set). It is also assumed that no player can infer which actions were taken (or not taken) for sure; that is, I suppose that given any a ∈ A, each ωi ∈ Ωi occurs with positive probability. We denote the marginal distribution of ωi by pi (ωi |a). Player i’s realized payoff is determined by her own action and signal, and denoted πi (ai ωi ). Her expected payoff is given by πi (ai ωi )p(ω|a) gi (a) = ω∈Ω
The stage game is played repeatedly over an infinite time horizon t = 1 and each player i’s average discounted payoff is given by (1 − δ) × 2 ∞ t−1 where δ ∈ (0 1) is the discount factor and a(t) ∈ A is the t=1 gi (a(t))δ action profile at time t. A mixed action for player i is denoted by αi ∈ Δ(Ai ), where Δ(Ai ) is the set of probability distributions over Ai . With an abuse of notation, we denote the expected payoff and signal distribution under a mixed action profile α = (α1 αN ) by gi (α) and p(ω|α), respectively. A private history for player i up to time t is the record of player i’s past actions and signals, hti = (ai (1) ωi (1) ai (t) ωi (t)) ∈ Hit ≡ (Ai × Ωi )t . To determine the initial action of each player, we introduce a dummy initial history (or null history) h0i and let Hi0 be a singleton set {h0i }. A pure strategy si for player i is a function specifying an action after any history: formally, si : Hi → Ai , where Hi = t≥0 Hit . Similarly, a (behaviorally) mixed strategy for player i is denoted by σi : Hi → Δ(Ai ). A continuation strategy for player i after private history hti is denoted by σi [hti ], defined as (i) σi [hti ](h0i ) = σi (hti ) and (ii) for any other history hi = h0i , σi [hti ](hi ) = σi (hti hi ), where hti hi represents a history obtained by attaching hi after hti . For any given strategy profile σ = (σ1 σN ) and any private history profile ht = (ht1 htN ), let BRi (σ−i [ht−i ]) be the set of best-reply strategies for player i against σ−i [ht−i ]. EHO defined a belief-free strategy profile as follows.
880
KANDORI, MICHIHIRO
DEFINITION 1: A strategy profile σ is belief-free if for any ht and i, σi [hti ] ∈ BRi (σ−i [ht−i ]). Now I relax this belief-free condition in the following way. Fix any strategy profile σ and history profile ht = (a(1) ω(1) a(t) ω(t)). At the end of period t, what would player i’s belief over the opponents’ continuation strategies be if he knew the opponents’ private histories up to the actions in the previous period (a−i (1) ω−i (1) a−i (t))? This is given by the probability mixture of continuation strategy profiles of the opponents, σ−i [a−i (1) ω−i (1) a−i (t) ω−i (t)] for ω−i (t) ∈ Ω−i each of which is chosen with conditional probability p−i (ω−i (t)|a(t) ωi (t)). Let us denote the probability distribution thus defined over the opponents’ continuation strategies by σ −i [a−i (1) ω−i (1) a−i (t)|hti ]. DEFINITION 2: A strategy profile σ is weakly belief-free if for any ht = (a(1) ω(1) a(t) ω(t)) and i, σi [hti ] ∈ BRi (σ −i [a−i (1) ω−i (1) a−i (t)| hti ]). This definition says that, under a weakly belief-free strategy profile, player i in period t + 1 does not have to know the opponents’ histories up to the previous actions (a−i (1) ω−i (1) a−i (t)) to calculate his optimal continuation strategy. He may, however, need to understand correctly that for each possible action profile a(t), the private signals in the previous period are distributed according to p(ω(t)|a(t)). In the subsequent sections, I characterize the set of weakly belief-free equilibria. 3. ONE-PERIOD MEMORY In this section, let us consider weakly belief-free equilibria with one-period memory.3 This is a particularly tractable class which subsumes a major segment of the belief-free equilibria identified by Ely and Valimaki (2002) and EHO as a special case. The Appendix considers fully general strategies. We say that player i’s strategy has one-period memory if the current (mixed) action αi (t) depends only on ai (t − 1) and ωi (t − 1). We denote the probability of ai (t + 1) given ai (t) and ωi (t) by mti (ai (t + 1)|ai (t) ωi (t)) and call mti a one-period memory transition rule. 3 Deviations to general strategies (not necessarily with one-period memory) are allowed, so that we are not weakening the usual equilibrium conditions.
EQUILIBRIA IN REPEATED GAMES
881
Under a one-period memory strategy profile, at each moment t, the current action profile a(t) determines the continuation play (independent of previous history). Hence, we can define uti (a(t)) as the (average) expected continuation payoff to player i. Let us call the game defined by (uti Ai )i=1N a reduced game. This enables us to view a repeated game as a sequence of reduced games u1i u2i and below I analyze its recursive structure. This is in contrast to the previous literature (Abreu, Pearce, and Stacchetti (1990) and EHO), which views a repeated game as a sequence of continuation payoff sets and exploits its associated recursive structure.4 Before stating my characterization, it is necessary to define a couple of concepts. I say that a probability distribution q on A × Ω is a correlated equilibrium of game u : A → RN when (1) ui (a)q(a ω) ≥ ui (ai a−i )q(a ω) ∀i ∀ai ∀ωi ∀ai a−i ω−i
a−i ω−i
This differs from the standard definition of correlated equilibria, which is stated in terms of a probability distribution over A (only). To characterize weakly belief-free equilibria, we need to consider situations where each player receives a recommended equilibrium action and some additional information ωi (her private signal in the previous period). Condition (1) thus ensures that player i has an incentive to follow the recommended action ai under the presence of additional information ωi . The set of correlated equilibria of game u is denoted by (2)
C(u) ≡ {q ∈ Δ(A × Ω)|condition (1) holds}
where Δ(A × Ω) is the set of probability distribution over A × Ω. A standard result for the set of correlated equilibria carries over to our formulation: C(u) is nonempty and convex. Under a profile of one-period memory transition rules m = (m1 mN ), the probability of (a(t + 1) ω(t)) given a(t) is given by (3)
q (a(t + 1) ω(t)|a(t)) ≡ m
N
mi (ai (t + 1)|ai (t) ωi (t))p(ω(t)|a(t))
i=1
Its marginal distribution is denoted by pm (a(t + 1)|a(t)) ≡ (4) qm (a(t + 1) ω(t)|a(t)) ω(t)∈Ω
4 The concept of reduced games is not new. It is basically the same as the function E in Abreu, Pearce, and Stacchetti (1990) in the context of public monitoring of repeated games, and Mailath and Samuelson’s (2006) textbook employs the same concept for repeated games with any monitoring structure. The contribution of the present paper is to show that this concept is particularly useful in analyzing a certain class of equilibria in repeated games with private monitoring.
882
KANDORI, MICHIHIRO
I am now ready to introduce my equilibrium conditions. DEFINITION 3: A set of reduced games U ⊂ {u | u : A → RN } is selfgenerating if, for any u ∈ U, there exist v ∈ U and a one-period memory transition rule profile m such that ∀a u(a) = (1 − δ)g(a) + δ (5) v(a )pm (a |a) a ∈A
and (6)
∀a
qm (· ·|a) ∈ C(v)
where C(u), qm , and pm are defined by (2), (3), and (4). This definition can be interpreted as follows. Equation (5) shows that a given reduced game payoff profile u can be decomposed into a current payoff profile g and the continuation payoff profile v. Condition (6) is the key requirement, which says that players are always playing a correlated equilibrium of the continuation reduced game v after any action profile is played today.5 Now let us say that the reduced game u is generated by (continuation) reduced game v if conditions (5) and (6) are satisfied. The definition above says that a set of reduced games U is self-generating if any reduced game in this set is generated by another (continuation) reduced game in the same set. Note that any weakly belief-free equilibrium with one-period memory is associated with a self-generating set U = {u1 u2 }, where ut is the reduced game in period t, which is generated by ut+1 . Now we show that the weakly belief-free equilibrium payoffs with one-period memory can be characterized by the Nash equilibria associated with a selfgenerating set of reduced games. Let N(u) be the Nash equilibrium payoff set associated with game u. Then one obtains the following complete characterization of one-period memory belief-free equilibria, which is similar to Abreu, Pearce, and Stacchetti (1990). Note, however, that the present recursive characterization is given in terms of reduced games, in contrast to continuation payoff sets in Abreu, Pearce, and Stacchetti. THEOREM 1: Let U ⊂ {u | u : A → RN } be self-generating and bounded in the sense that there exists K > 0 such that |ui (a)| < K for all i, u ∈ U and a. Then, any point in N(U) ≡ N(u) u∈U 5 Our formal definition of weakly belief-free equilibrium requires that qm (· ·|a) ∈ C(v) is satisfied for any a. In our model, however, player i always believes that the opponents have never deviated. Hence, to obtain an equilibrium, it is enough to require qm (· ·|a) ∈ C(v) for (i) any ai and (ii) any a−i that is played with a positive probability on the equilibrium path.
EQUILIBRIA IN REPEATED GAMES
883
can be achieved as the average payoff of a one-period memory weakly belief-free sequential equilibrium. The set of all one-period memory weakly belief-free sequential equilibrium payoff profiles is given by N(U ∗ ), where U ∗ is the largest (in the sense of set inclusion) bounded self-generating set. A couple of remarks are in order before presenting the proof. First, a weakly belief-free equilibrium always exists, because the repetition of the stage game Nash equilibrium is a weakly belief-free equilibrium. Second, the proof below shows that the largest bounded self-generating set U ∗ is well defined. PROOF OF THEOREM 1: For any u ∈ U, repeated application of (5) induces a sequence of reduced games {ut } and one-period memory strategies {mt } that satisfy t ut+1 (a )pm (a |a) ∀a ut (a) = (1 − δ)g(a) + δ a ∈A
and (7)
∀a
t
qm (· ·|a) ∈ C(ut+1 )
for t = 1 2 with u1 = u. Hence, for any T (> 2), we have
T −1 t−1 T T −1 g(a(t))δ + u (a(t + 1))δ a u(a) = (1 − δ) g(a) + E t=2
The expectation E[·|a] presumes that the distribution of a(t + 1) given a(t) t is pm (a(t + 1)|a(t)) with a(1) = a. As uT is bounded, we can take the limit T → ∞ to get
∞ t−1 (8) u(a) = (1 − δ) g(a) + E g(a(t))δ a t=2
Hence u(a) can be interpreted as the average payoff profile when the players choose a today and follow one-period memory strategy profile mt , t = 1 2 Let α be a (possibly mixed) Nash equilibrium of game u, and let σ be the strategy where α is played in the first period and the players follow mt , t = 1 2 By construction, σ achieves an average payoff of u(α) (the expected payoff associated with α), and we show below that it is a sequential equilibrium because after any history no player can gain from a one-shot unilateral deviation.6 In 6 The standard dynamic programming result shows that this implies that no (possibly infinite) sequence of unilateral deviations is profitable.
884
KANDORI, MICHIHIRO
the first period, no one can gain by a one-shot unilateral deviation from α because it is a Nash equilibrium of reduced game u. For stage t > 1, take any player i and any private history for her (a0i (1) a0i (t − 1) ω0i (1) ω0i (t − 1)). Let μ(a(t − 1)) be her belief about last period’s action profile given her private history. Then her belief about the current signal distribution is t−1 q(a(t) ω(t)) = qm (a(t) ω(t)|a(t − 1))μ(a(t − 1)) a(t−1)∈A
(Note that under σ, other players’ continuation strategies do not depend on their private histories except for their current signals.) Let v ∈ U be the continuation payoff in stage t (including stage t’s payoff). Then condition (7) for self-generation, qm (· ·|a) ∈ C(v) for all a, and the convexity of the correlated equilibrium set C(v) imply q ∈ C(v). This means that player i cannot gain by one-shot unilateral deviation at this stage. Conversely, given any one-period memory weakly belief-free sequential equilibrium, one can calculate a sequence of reduced games ut t = 1 2 It is straightforward to check that U ≡ {ut | t = 1 2 } is a self-generating set bounded by K ∗ ≡ maxia |gi (a)|. Since a union of self-generating sets bounded by K ∗ is also self-generating and bounded by K ∗ , we conclude that the set of all one-period memory weakly belief-free sequential equilibrium payoff pro∗ ∗ files is given by N(U K ), where U K is the largest (in the sense of set inclusion) self-generating set bounded by K ∗ . Now consider any self-generating set U which is bounded (not necessarily by K ∗ ). The first part of this proof shows that U is actually bounded by K ∗ (as any u ∈ U is an average payoff ∗ profile of the repeated game). This implies U K = U ∗ , which completes the proof. Q.E.D. 4. AN EXAMPLE: THE CHICKEN GAME IN THE REPEATED PRISONER’S DILEMMA In this section, I present a simple example of a one-period memory weakly belief-free equilibrium, where the set U in our characterization (Definition 3) is a singleton. This example shows that a weakly belief-free equilibrium can have the following desirable properties: (i) it can be in very simple pure strategies, (ii) players always have a strict incentive to follow the equilibrium action, and (iii) it can outperform the equilibria identified by previous work. The equilibrium in this example also has an interesting property that it “embeds” the chicken game (as the reduced game) in a repeated prisoner’s dilemma game. The stage game has the following prisoner’s dilemma structure:
C D
C 1 1 3/2 −1/6
D −1/6 3/2 0 0
Each player's private signal has binary outcomes ω_i ∈ {G, B}, i = 1, 2. The relationship between current actions and signals (the monitoring structure) is:

    (C, C):   ω1\ω2    G      B          (D, C):   ω1\ω2    G      B
                G     1/3    1/3                     G     1/8    1/2
                B     1/3     0                      B     1/4    1/8

    (C, D):   ω1\ω2    G      B          (D, D):   ω1\ω2    G      B
                G     1/8    1/4                     G      0     2/5
                B     1/2    1/8                     B     2/5    1/5
This set of distributions admits the following natural interpretation. When both players cooperate, they can avoid a mutually bad outcome (B, B). If one player defects, with a high probability (1/2) the defecting player enjoys a good outcome (G), while the other player receives a bad one (B). Finally, when both players defect, they cannot achieve a mutually good outcome (G, G). I have made some entries in the above tables equal to 0 to simplify the analysis, but as I will show at the end of this section, the main results continue to hold even if we replace 0 with a small positive number. Let us consider the following simple (and intuitive) one-period memory transition rule:

(9)    a_i(t) = C if ω_i(t − 1) = G,  and  a_i(t) = D if ω_i(t − 1) = B.

In what follows, I show that this simple strategy constitutes a weakly belief-free equilibrium and that it outperforms all belief-free equilibria.^7 The reduced game payoff for profile a, denoted u_i(a), is defined to be the average payoff when a is played in the initial period and then players follow the above strategy. Since the transition rule is time-independent, the reduced game is also time-independent. Let us denote the reduced game payoffs by
                 C           D
    C          x, x        α, β
    D          β, α        y, y
For example, (u_1(C, D), u_2(C, D)) = (α, β). Since the same reduced game u is played in each period, the dynamic programming value equation in the self-generation condition (5) reduces to

    ∀i, ∀a:   u_i(a) = (1 − δ)g_i(a) + δ Σ_{a′∈A} u_i(a′) p^m(a′|a),

where p^m(a′|a) denotes the transition probability of current and subsequent actions under our strategy (9) and the given monitoring structure. This provides a system of equations that determines the reduced game payoffs:

    x = (1 − δ) + δ(1/3)(x + α + β),
    y = δ[(1/5)y + (2/5)(α + β)],
    α = (1 − δ)(−1/6) + δ[(1/8)x + (1/4)α + (1/2)β + (1/8)y],
    β = (1 − δ)(3/2) + δ[(1/8)x + (1/2)α + (1/4)β + (1/8)y].

Figure 1 shows the reduced game payoffs for various values of the discount factor. By definition, the reduced game coincides with the prisoner's dilemma game (i.e., the stage game) when δ = 0. Numerical computation shows that when δ > 4/7, the slope of the edge connecting u(D, D) and u(D, C) becomes positive (I will provide some intuition for why this is the case shortly), and this implies that player 2 has no incentive to deviate from (D, C). In other words, when δ > 4/7, the reduced game becomes a chicken game, which has two pure strategy Nash equilibria, (D, C) and (C, D). Before going into the details, let me provide some intuition about how the equilibrium in this example works.

^7 Phelan and Skrzypacz (2009) developed a quite general computational method to check the equilibrium conditions under private monitoring when strategies are represented by finite state automata. The present example shows that it is possible to satisfy their equilibrium conditions in a simple way.
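For readers who wish to verify these values, the following minimal numerical sketch (not part of the original analysis) solves the four value equations as a linear system and reports the ratio (α − y)/(β − x) that appears in the incentive condition discussed below:

```python
import numpy as np

def reduced_game(delta):
    """Solve the four value equations above for (x, y, alpha, beta)."""
    # Rows: the x, y, alpha, and beta equations, written as M @ [x, y, a, b] = c.
    M = np.array([
        [1 - delta / 3, 0.0, -delta / 3, -delta / 3],
        [0.0, 1 - delta / 5, -2 * delta / 5, -2 * delta / 5],
        [-delta / 8, -delta / 8, 1 - delta / 4, -delta / 2],
        [-delta / 8, -delta / 8, -delta / 2, 1 - delta / 4],
    ])
    c = np.array([1 - delta, 0.0, -(1 - delta) / 6, 3 * (1 - delta) / 2])
    return np.linalg.solve(M, c)

for delta in (0.0, 4 / 7, 4 / 5, 0.99):
    x, y, a, b = reduced_game(delta)
    print(f"delta={delta:.4f}: x={x:.3f}, y={y:.3f}, alpha={a:.3f}, "
          f"beta={b:.3f}, (alpha-y)/(beta-x)={(a - y) / (b - x):.3f}")
```

At δ = 0 this reproduces the stage game payoffs, and the ratio enters the interval (1, 2) only for δ close to 1, consistent with the threshold reported below.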
FIGURE 1.—Reduced games: from outer to inner, δ = 0, δ = 4/7, δ = 4/5, and δ = 0.99.
Note that our one-period memory strategy induces the following probability distributions on the current action profile:

    a(t−1) = (C, C):  a1(t)\a2(t)    C      D       a(t−1) = (D, C):  a1(t)\a2(t)    C      D
                          C         1/3    1/3                            C         1/8    1/2
                          D         1/3     0                             D         1/4    1/8

    a(t−1) = (C, D):  a1(t)\a2(t)    C      D       a(t−1) = (D, D):  a1(t)\a2(t)    C      D
                          C         1/8    1/4                            C          0     2/5
                          D         1/2    1/8                            D         2/5    1/5
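Because rule (9) maps last period's signal one-to-one into this period's action, these transition tables are simply the monitoring tables with G relabeled as C and B relabeled as D. A small sketch (again, an illustration rather than part of the original analysis) makes the relabeling explicit:

```python
# P(omega_1, omega_2 | current action profile), from the monitoring tables above.
signal_dist = {
    ("C", "C"): {("G", "G"): 1/3, ("G", "B"): 1/3, ("B", "G"): 1/3, ("B", "B"): 0},
    ("D", "C"): {("G", "G"): 1/8, ("G", "B"): 1/2, ("B", "G"): 1/4, ("B", "B"): 1/8},
    ("C", "D"): {("G", "G"): 1/8, ("G", "B"): 1/4, ("B", "G"): 1/2, ("B", "B"): 1/8},
    ("D", "D"): {("G", "G"): 0, ("G", "B"): 2/5, ("B", "G"): 2/5, ("B", "B"): 1/5},
}
act = {"G": "C", "B": "D"}  # rule (9): play C after G, play D after B
transition = {
    profile: {(act[w1], act[w2]): p for (w1, w2), p in dist.items()}
    for profile, dist in signal_dist.items()
}
print(transition[("C", "D")])  # mass 1/2 on (D, C): the deviator is punished
```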
The above tables show that once (C, D) is played, (D, C) follows with a large probability, thereby punishing player 2, who initially played D. Additionally, note that player 1, who was cheated initially, benefits from the transition. Hence the equilibrium strategy here provides incentives by the transfer of continuation payoffs (taking away some continuation payoff from the deviator and giving it to the victim). As I will elaborate upon later, this is an essential mechanism for achieving efficiency in repeated games with imperfect monitoring (Fudenberg, Levine, and Maskin (1994)). After (D, C) is played, the action profile largely goes back and forth between (C, D) and (D, C). As the discount factor increases, this has a large impact on the average payoffs, and the reduced game payoff set, which is a prisoner's dilemma game payoff set when δ = 0, is "compressed" in the northwest–southeast directions (see Figure 1). As a result, the slope of the edge connecting u(D, D) and u(D, C) in Figure 1 becomes positive, and the reduced game becomes a chicken game for a large δ. Since (C, D) and (D, C) are Nash equilibria of the chicken game, a joint distribution of actions which places relatively large probabilities on (C, D) or (D, C) can be a correlated equilibrium. All probability distributions in the above tables indeed have this property. Hence our strategy specifies a correlated equilibrium of the reduced game after any history and, therefore, it is a weakly belief-free equilibrium. Let us now examine the incentive constraint (6) in detail. In the general model in Section 3, I defined correlated equilibrium with respect to joint distributions over (a, ω) (see (1)). In the equilibrium considered here, there is a one-to-one correspondence between a (= a(t)) and ω (= ω(t − 1)) (see (9)), and as a result I can apply the standard definition of correlated equilibrium (in terms of distributions of a alone). Simple computation shows that the joint distributions in the above tables are correlated equilibria of the reduced game if 2 > (α − y)/(β − x) > 1 (and the incentive constraints are satisfied with strict inequality). This condition is indeed satisfied when δ > 0.98954. Note that the (strict) incentive constraint 2 > (α − y)/(β − x) > 1 continues to hold even if we slightly perturb the stage game payoffs, the signal distributions, or the
discount factor. Hence the weakly belief-free equilibrium identified above remains a (strict) equilibrium for all nearby games. This is in sharp contrast to the equilibria obtained by Ely and Valimaki (2002) or EHO, whose essential feature is that at least one player is indifferent between some actions. The mixing probability in a belief-free equilibrium has to be fine-tuned to the structure of the game: if the payoff, discount factor, or monitoring structure changes, the belief-free equilibrium strategy changes. Note also Bhaskar's (2000) critique of belief-free equilibria: the mixed strategy employed by belief-free equilibria may not be justified by Harsanyi's purification argument (with independent perturbations to the stage payoffs).^8 The weakly belief-free equilibrium in this section is free from those problems.

^8 A follow-up paper by Bhaskar, Mailath, and Morris (2008) partially confirms this conjecture. They considered one-period memory belief-free strategies à la Ely–Valimaki in a perfect monitoring repeated prisoner's dilemma game (note that the Ely–Valimaki belief-free equilibrium applies to perfect as well as imperfect private monitoring). They showed that those strategies cannot be purified by one-period memory strategies, but can be purified by infinite memory strategies. They conjectured that purification fails for any finite memory strategy (so that purification is possible, but only with substantially more complex strategies). They also conjectured that similar results hold for the imperfect private monitoring case.

Finally, let me provide a welfare comparison between the weakly belief-free equilibrium and belief-free equilibria in this example. To this end, I employed EHO's characterization (Proposition 5 in EHO) to compute an upper bound on all belief-free equilibrium payoffs in this example. The analysis revealed that any belief-free equilibrium payoff profile (v_1, v_2) must satisfy v_1 + v_2 ≤ 8/7, while all reduced game payoffs associated with our weakly belief-free equilibrium strategy lie above this upper bound v_1 + v_2 = 8/7 when δ > 0.98954 (the details can be found in the Supplemental Material (Kandori (2011))). The following summarizes our findings in this section.

SUMMARY: When δ > 0.98954, playing (D, C) or (C, D) in the first period, followed by

    a_i(t) = C if ω_i(t − 1) = G,  and  a_i(t) = D if ω_i(t − 1) = B,

is a weakly belief-free equilibrium of the repeated prisoner's dilemma game defined in this section. Furthermore, the equilibrium payoff profile lies above the Pareto frontier of the belief-free equilibrium payoffs.
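The welfare claim can be spot-checked numerically. The following standalone sketch (an illustration; the paper's own verification is in the Supplemental Material) solves the value equations at δ = 0.99 and compares each reduced-game payoff sum with the belief-free bound 8/7:

```python
import numpy as np

d = 0.99  # a discount factor above the 0.98954 threshold
# Value equations from Section 4, written as M @ [x, y, alpha, beta] = c.
M = np.array([
    [1 - d / 3, 0.0, -d / 3, -d / 3],
    [0.0, 1 - d / 5, -2 * d / 5, -2 * d / 5],
    [-d / 8, -d / 8, 1 - d / 4, -d / 2],
    [-d / 8, -d / 8, -d / 2, 1 - d / 4],
])
c = np.array([1 - d, 0.0, -(1 - d) / 6, 3 * (1 - d) / 2])
x, y, alpha, beta = np.linalg.solve(M, c)
for profile, total in [("(C,C)", 2 * x), ("(C,D)/(D,C)", alpha + beta), ("(D,D)", 2 * y)]:
    print(profile, round(total, 3), "exceeds 8/7" if total > 8 / 7 else "does not exceed 8/7")
```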
5. COMPARISON BETWEEN BELIEF-FREE AND WEAKLY BELIEF-FREE EQUILIBRIA

In this section I provide some comparisons between belief-free and weakly belief-free equilibria in terms of the repeated prisoner's dilemma game in the previous section. As we will see, the idea of a reduced game is useful in comparing the two concepts.

Construction of Equilibria

Ely and Valimaki (2002) judiciously chose one-period memory mixed strategies, so that the reduced game has the following very special structure:
                 C           D
    C          R, R        P, R
    D          R, P        P, P

with R > P.
Note that for any realization of the opponent's action, a player is always indifferent between C and D. This implies that any joint distribution of actions is a correlated equilibrium of this reduced game and, in particular, in their equilibrium, players are always playing a correlated equilibrium of this reduced game. One of the main messages of the present paper is that weakly belief-free equilibria accommodate a more general way to play a correlated equilibrium of the reduced game, one which does not rely on the indifference conditions.

Welfare Comparison

Let us examine the best belief-free equilibrium payoffs, which start with initial action profile (C, C). In the belief-free equilibrium, a player has an incentive to play C because a deviation to D induces the opponent to punish (to play D) with a large probability. Figure 2(a) shows the directions of punishment in the belief-free equilibrium. Note that the belief-free equilibrium payoff set is square-shaped, as the payoff table above indicates. The figure shows that when one player is punished, the other player's payoff cannot be increased in the belief-free equilibrium. This implies that the total payoff of the players is reduced and, as a result, the belief-free equilibrium suffers from
FIGURE 2.—Reduced game payoff set and the directions of punishment.
a heavy welfare loss. On the other hand, if a player's payoff is increased when the opponent is punished, the loss of total payoff is mitigated (and, if done correctly, the loss can completely vanish, as the Fudenberg–Levine–Maskin (1994) folk theorem shows). The weakly belief-free equilibrium in the previous section embodies such transfers of continuation payoffs (although not as perfectly as the Fudenberg–Levine–Maskin equilibria do) and, therefore, it does better than the belief-free equilibria. The major directions of punishment are shown in Figure 2(b).

Independent Signals

In the previous section, I constructed a strict pure strategy weakly belief-free equilibrium, but this requires that the private signals be correlated. In fact, when private signals are independent (i.e., p(ω_1, …, ω_N|a) = p_1(ω_1|a) ⋯ p_N(ω_N|a)), it is well known that any strict pure strategy equilibrium is a repetition of the stage game Nash equilibria (this is implied by Matsushima's (1991) result). Generally speaking, a weakly belief-free equilibrium (which is not belief-free) is more likely to exist when signals are correlated. Recall that a weakly belief-free equilibrium requires that the joint distribution of a(t) and ω(t − 1) be in a convex set C(u^t) (the set of correlated equilibria of the reduced game) after any history. When private signals are independent, however, the joint distribution of a(t) has to be in a much smaller set, N(u^t) (the set of Nash equilibria of the reduced game). Note that N(u^t) has only a finite number of elements (unless there are some ties in the reduced game payoffs, as in belief-free equilibria). Hence the weakly belief-free equilibrium condition is easier to satisfy when private signals are correlated, as in the example in Section 4.

APPENDIX: GENERAL STRATEGIES

In this section, I consider weakly belief-free equilibria in general strategies (i.e., strategies which do not necessarily have one-period memories). For each player i, specify (i) a set of (finitely or countably many) states Θ_i, (ii) an initial state θ_i(1) ∈ Θ_i, (iii) a (mixed) action choice for each state, ρ_i : Θ_i → Δ(A_i), and (iv) a state transition τ_i : Θ_i × A_i × Ω_i → Δ(Θ_i). The transition determines the probability distribution of the next state θ_i(t + 1) based on the current state θ_i(t), the current action a_i(t), and the current private signal ω_i(t). I call ms_i ≡ (Θ_i, θ_i(1), ρ_i, τ_i) a machine strategy. All strategies can trivially be represented as machine strategies when we set Θ_i equal to the set of all histories for player i: Θ_i = H_i. The action choice and transition rule are assumed to be time-independent, but this is without loss of generality: we can always include the current time in the state variable θ_i.

Under a machine strategy profile ms = (ms_1, …, ms_N), we can compute the continuation payoff to player i when (i) all players' continuation strategies are specified by ms given θ(t) and (ii) the current action profile is a(t). Denote this by v_i(a(t)|θ(t)) and let us call it an ex post reduced game. To state the incentive constraints for weakly belief-free equilibria, we need the following definition.

DEFINITION 4: A probability distribution r over Ω × Θ × A is a Bayesian correlated equilibrium of the ex post reduced game v when
(10)    ∀i, ∀a_i, ∀ω_i, ∀θ_i, ∀a′_i:
        Σ_{a_{−i}, ω_{−i}, θ_{−i}} v_i(a|θ) r(ω, θ, a) ≥ Σ_{a_{−i}, ω_{−i}, θ_{−i}} v_i(a′_i, a_{−i}|θ) r(ω, θ, a)
The set of Bayesian correlated equilibria of the ex post reduced game v is denoted by

(11)    BC(v) = {r ∈ Δ(Ω × Θ × A) | condition (10) holds}.
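As an aside, the machine-strategy formalism is straightforward to render concretely. The sketch below (an illustration; the class and field names are mine, not the paper's) encodes rule (9) from Section 4 as a machine whose state is last period's private signal:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Set

@dataclass
class MachineStrategy:
    """ms_i = (Theta_i, theta_i(1), rho_i, tau_i), as defined above."""
    states: Set[Hashable]                                  # Theta_i
    initial: Hashable                                      # theta_i(1)
    action: Callable[[Hashable], Dict[str, float]]         # rho_i: state -> mixed action
    transition: Callable[[Hashable, str, str], Dict[Hashable, float]]  # tau_i

# Rule (9) as a two-state machine: the state is last period's private signal.
rule9 = MachineStrategy(
    states={"G", "B"},
    initial="G",  # an assumed convention; the paper fixes the first-period action separately
    action=lambda state: {"C": 1.0} if state == "G" else {"D": 1.0},
    transition=lambda state, a_i, omega_i: {omega_i: 1.0},  # next state = current signal
)
```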
Let q^{ms}(ω(t − 1), θ(t), a(t)|θ(t − 1), a(t − 1)) be the joint distribution of (ω(t − 1), θ(t), a(t)) given (θ(t − 1), a(t − 1)) under machine strategy profile ms, and let p^{ms}(θ(t), a(t)|θ(t − 1), a(t − 1)) be its marginal distribution. Now I am ready to state the characterization conditions.

DEFINITION 5: An ex post reduced game v_i(a|θ), i = 1, …, N, is self-generating if there exists a machine strategy profile ms (defined over states θ ∈ Θ) such that

(12)    ∀i, ∀a, ∀θ:   v_i(a|θ) = (1 − δ)g_i(a) + δ Σ_{(θ′,a′)∈Θ×A} v_i(a′|θ′) p^{ms}(θ′, a′|θ, a)

and

(13)    ∀a, ∀θ:   q^{ms}(·, ·, ·|θ, a) ∈ BC(v).
Given an ex post reduced game v = v(a|θ), let N(v) be the set of Nash equilibrium payoff profiles of the game g(a) = v(a|θ) for some θ. Suppose that v is self-generating and w ∈ N(v) is obtained as a Nash equilibrium of the game g(a) = v(a|θ). Then w is obtained as a machine strategy equilibrium where the initial state is θ. Formally, we obtain the following characterization result.

THEOREM 2: Let v be a self-generating ex post reduced game which is bounded in the sense that there exists K > 0 such that |v_i(a|θ)| < K for all i, a, and θ. Then any w ∈ N(v) is a weakly belief-free equilibrium payoff profile. Conversely, any weakly belief-free equilibrium payoff profile is an element of N(v) for some bounded self-generating ex post reduced game v.

The proof is basically the same as in Section 3 and therefore is omitted.
REFERENCES
ABREU, D., D. PEARCE, AND E. STACCHETTI (1990): "Toward a Theory of Discounted Repeated Games With Imperfect Monitoring," Econometrica, 58, 1041–1063. [878,881,882]
BHASKAR, V. (2000): "The Robustness of Repeated Game Equilibria to Incomplete Payoff Information," Mimeo, University of Essex. [888]
BHASKAR, V., G. MAILATH, AND S. MORRIS (2008): "Purification in the Infinitely-Repeated Prisoner's Dilemma," Review of Economic Dynamics, 11, 515–528. [888]
ELY, J. C., AND J. VALIMAKI (2002): "A Robust Folk Theorem for the Prisoner's Dilemma," Journal of Economic Theory, 102, 84–105. [877,878,880,888,889]
ELY, J. C., J. HORNER, AND W. OLSZEWSKI (2005): "Belief-Free Equilibria in Repeated Games," Econometrica, 73, 377–415. [877]
FUDENBERG, D., D. K. LEVINE, AND E. MASKIN (1994): "The Folk Theorem With Imperfect Public Information," Econometrica, 62, 997–1039. [877,887,890]
HORNER, J., AND W. OLSZEWSKI (2006): "The Folk Theorem for Games With Private Almost-Perfect Monitoring," Econometrica, 74, 1499–1544. [878]
KANDORI, M. (2011): "Supplement to 'Weakly Belief-Free Equilibria in Repeated Games With Private Monitoring'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8480_proofs.pdf. [888]
MAILATH, G., AND L. SAMUELSON (2006): Repeated Games and Reputations. Oxford: Oxford University Press. [881]
MATSUSHIMA, H. (1991): "On the Theory of Repeated Games With Private Information, Part I: Anti-Folk Theorem Without Communication," Economics Letters, 35, 253–256. [890]
MATSUSHIMA, H. (2004): "Repeated Games With Private Monitoring: Two Players," Econometrica, 72, 823–852. [878]
OBARA, I. (1999): "Private Strategy and Efficiency: Repeated Partnership Game Revisited," Unpublished Manuscript, University of Pennsylvania. [878]
PHELAN, C., AND A. SKRZYPACZ (2009): "Beliefs and Private Monitoring," Mimeo, Stanford University. [885]
PICCIONE, M. (2002): "The Repeated Prisoner's Dilemma With Imperfect Private Monitoring," Journal of Economic Theory, 102, 70–83. [878]
YAMAMOTO, Y. (2007): "Efficiency Results in N Player Games With Imperfect Private Monitoring," Journal of Economic Theory, 135, 382–413. [878]
Faculty of Economics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan;
[email protected]. Manuscript received March, 2009; final revision received October, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 893–921
AN EXPERIMENTAL STUDY OF COLLECTIVE DELIBERATION

BY JACOB K. GOEREE AND LEEAT YARIV^1

We study the effects of deliberation on collective decisions. In a series of experiments, we vary groups' preference distributions (between common and conflicting interests) and the institutions by which decisions are reached (simple majority, two-thirds majority, and unanimity). Without deliberation, different institutions generate significantly different outcomes, tracking the theoretical comparative statics. Deliberation, however, significantly diminishes institutional differences and uniformly improves efficiency. Furthermore, communication protocols exhibit an array of stable attributes: messages are public, consistently reveal private information, provide a good predictor for ultimate group choices, and follow particular (endogenous) sequencing.

KEYWORDS: Jury decision-making, deliberative voting, strategic voting.

^1 We thank a co-editor and three anonymous referees for very helpful comments. We also thank Gary Charness, Guillaume Frechette, Dino Gerardi, John Kagel, Alessandro Lizzeri, Tom Palfrey, and Lise Vesterlund for many useful conversations and suggestions. Lauren Feiler, Salvatore Nunnari, and Julian Romero provided us with superb research assistance. We gratefully acknowledge financial support from the National Science Foundation (SES 0551014) and the European Research Council (ERC Advanced Grant, ESEI-249433).

© 2011 The Econometric Society    DOI: 10.3982/ECTA8852
1. INTRODUCTION

1.1. Overview

RANGING FROM JURY DECISIONS to political elections, situations in which groups of individuals determine a collective outcome are ubiquitous. There are two important observations that pertain to almost all collective processes observed in reality. First, decisions are commonly preceded by some form of communication among individual decision-makers (such as jury deliberations or election polls). Second, even when looking at a particular context, say U.S. civil jurisdiction, there is great variance in the type of institutions that are employed to aggregate private information into group decisions.^2 The recent theoretical literature has tried to assess the potential impacts of communication on group decision processes, making strong assumptions on the format of conversation (e.g., Austen-Smith and Feddersen (2005, 2006), analyzing one-shot simultaneous communication, or Gerardi and Yariv (2007), allowing for general cheap talk). While experimental and field investigations of collective decisions progress hand in hand, there are several inherent difficulties germane to field data in the context of group deliberation. First, the prior inclinations of decision-makers, the accuracy of information, and so forth may
suffer from endogeneity problems and may be difficult to calibrate. Second, protocols of conversation are rarely obtainable. Indeed, the existing field analysis in the jury context uses either exit surveys or mock juries.^3 Third, a controlled comparison of institutions is very difficult practically. Juries serve as a prime example in which communication is structured into the decision-making process. Even for particular types of cases, there is great institutional variance across state jurisdictions. Nonetheless, out-of-court settlements are not fully documented and may be affected by the voting rule in place, which makes for harsh empirical endogeneity problems (Priest and Klein (1984)).

^2 For example, in 30 state civil courts in the U.S., non-unanimous voting rules are employed that range from 2/3 majority to 7/8 majority and anything in between; see State Court Organization 1998, U.S. Department of Justice, Office of Justice Programs, available online at http://www.ojp.usdoj.gov/bjs/pub/pdf/sco98.pdf.

^3 For an overview of recent empirical research on deliberating juries, see Devine, Clayton, Dunford, Seying, and Pryce (2001).

The current paper reports observations from some of the first lab experiments aimed at understanding the effects of different institutions on outcomes when communication channels are available, as well as the impact of different preference distributions within a group on institutional performance. Furthermore, our design allows us to provide a characterization of the endogenous formation of communication protocols under different institutions and group preferences. Specifically, we conducted an array of experiments that emulate a jury decision-making process, in which groups of nine subjects were required to make a collective decision between one of two alternatives (a neutral version of acquittal or conviction). The returns to either alternative were randomly determined according to the realization of an underlying state (such as a guilty or innocent defendant), and each subject received a private signal about that realization (similar to the subjective interpretations of testimonies in a trial). We implemented a 3 × 3 × 2 design. Namely, we varied the distribution of preferences among subjects (one distribution entailing common interests and two allowing for different formats of heterogeneity), the institution or voting rule by which the group decision was made (simple majority, 2/3 supermajority, and unanimity), and the availability (or unavailability) of free-form communication. Our experimental setup can be thought of as a metaphor for a wide variety of settings, including not only jury voting, but also investment decisions by corporate strategy committees, hiring and tenure decisions by university faculty, performances rated by a group of judges, and more.

There are several insights that come out of our investigation. First, without the ability to communicate, agents behave in a rather sophisticated strategic manner. Across treatments, agents vote against their private information when the informative equilibrium prescribes that they do so. While the experimental observations do not match the Bayesian Nash predictions pointwise numerically, the data do reveal the theoretically predicted comparative statics across voting rules and across preferences. One consequence of subjects' strategic behavior is that, absent communication, the efficiency of simple majoritarian rules is greater than that emerging from voting rules that require more consensual decisions (see, e.g., Feddersen and Pesendorfer (1998)).

The second, and possibly most important, insight is that free-form communication greatly improves efficiency and diminishes institutional differences. The extent to which institutional differences are mitigated depends on the preference heterogeneity between individuals. In particular, when agents have shared (or homogeneous) preferences, as much of the extant strategic voting literature assumes (see below), there are no significant differences between outcomes under different voting rules when communication is available. Furthermore, groups make choices that are consistent with the welfare maximizing decisions given the available aggregate information in the group. These observations have important implications. On the one hand, they help explain the great variety of institutions in what appear to be very similar contexts (such as trials of a particular type). Indeed, when the panel of decision-makers can freely deliberate prior to making a collective decision, the institution in and of itself may not be crucial to outcomes. On the other hand, these results suggest that, from a policy perspective, affecting the communication protocols that precede decisions can serve as a vital design instrument.

The third chief insight pertains to the characteristics of the endogenously created communication protocols. In our experiments, communication is predominantly public, nearly always truthful, and is a strong predictor of group choice. Correct decisions are associated with shorter chats and higher fractions of the conversations dedicated to information exchange. Furthermore, across all treatments, protocols are consistently composed of two distinct phases: information sharing and aggregation of opinions. In fact, a schematic description of the procedure subjects utilize is as follows. Subjects first share their information (truthfully and publicly), then decide collectively on the ultimate decision, and finally all vote for that option. Indeed, voting in unison is the modal outcome in almost all of our communication treatments. Naturally, this procedure explains the similarity in outcomes observed across voting rules when subjects deliberate.

1.2. Related Literature

A formal approach to the study of collective decision-making under uncertainty originated with the work of Condorcet (1785), who considered group decision problems in which members have a common interest but differ in their beliefs about which alternative is correct. In particular, Condorcet considered a model with two possible states of the world (e.g., a defendant who is innocent or guilty) and individual group members, privately and imperfectly informed about which state applies, who vote for one of two alternatives (e.g., acquit or convict). The common interest assumption assures all group members readily agree about which alternative to pick if information is public (i.e., all share the same threshold of doubt for conviction). Differences in beliefs or preferences,
however, create an information aggregation problem, making it harder for the group to reach a consensus and draw the right conclusion. Within the context of this simple 2 × 2 model, generally referred to as the Condorcet jury model, Condorcet (1785) argued that majority is an efficient voting rule to aggregate the group's scattered pieces of information. Furthermore, he concluded that under majority rule, groups make better decisions than individuals and large groups almost surely make the right choice. Condorcet derived this "jury theorem" assuming individuals vote sincerely, that is, their votes simply follow their private information. Recent work, however, has shown that rational voters do not necessarily behave this way (see Austen-Smith and Banks (1996), Myerson (1998), Feddersen and Pesendorfer (1996, 1997, 1998)). Since a vote matters only when it is pivotal, a strategic agent considers the information contained in the event of being pivotal, taking into account others' strategies. In particular, Nash equilibrium strategies may involve strategic voting, where individuals go against their private information. Moreover, equilibrium strategies systematically vary with the voting rule. There are two sets of conclusions this literature has produced. First, unanimity is expected to perform worse than non-unanimous voting rules. In fact, under unanimity the probability of a wrongful conviction may increase with jury size and is bounded away from zero as the jury size grows large. Second, as jury size becomes infinitely large, non-unanimous voting rules fully aggregate the available information and generate efficient outcomes. The design of our experiments matches the theoretical setup of Feddersen and Pesendorfer (1998). In particular, our design allows us to test for strategic voting experimentally when communication is not available, under different voting rules and different preference distributions.

Recently, there have been several papers that analyze the potential impact of communication on collective choice outcomes. Coughlan (2000) and Austen-Smith and Feddersen (2005, 2006) were among the first to point out that the availability of particular communication protocols^4 can dramatically alter collective decisions, while Gerardi and Yariv (2007) showed that unrestricted communication (such as jury deliberation) renders a large class of voting rules equivalent in terms of the sets of sequential equilibrium outcomes they generate.^5 It is the latter paper that motivates the design of the experimental sessions with communication.

^4 Coughlan (2000) considered straw polls and Austen-Smith and Feddersen (2005, 2006) considered one-stage simultaneous and public conversation. See also Elster (1998) for related work in other fields.

^5 Lizzeri and Yariv (2011) achieved a similar result for certain environments when considering communication protocols that entail a stage of costly information collection and a stage of collective decision. Gerardi and Yariv (2008) effectively considered communication protocols as a design instrument in a particular mechanism design setup pertaining to information acquisition within collective choice. Meirowitz (2006) considered a mechanism design problem that generates incentives for protocols to be carried out in a particular way.
We allow for free-form communication, study the emergent (endogenous) communication protocols, and compare the outcomes generated by different institutions.

Experimentally, there have been several recent laboratory inquiries into group decision-making. Guarnaschelli, McKelvey, and Palfrey (2000) tested some of the extreme Nash predictions by inspecting a jury (of size three and six) and varying the voting rule (majority and unanimity). Their data confirm the Nash prediction that unanimity rule triggers strategic voting; jurors with an innocent signal mix between acquit and convict.^6 In contrast, under majority rule, voting tends to be sincere. Battaglini, Palfrey, and Morton (2010) also identified strategic voting behavior in the form of the "swing-voter's curse" (Feddersen and Pesendorfer (1996)). For an overview of political economy experiments, see Palfrey (2006). Communication is specifically incorporated in Dickson, Hafer, and Landa (2008), who studied the interpretation of information by subjects in a one-round protocol in which subjects (with potentially different preferences and private information) simultaneously decide whether to speak or to listen.^7

As a summary of the extant literature, we note that the experiments described in this paper provide three important methodological innovations. Most importantly, our study constitutes a first experimental inquiry of how free-form communication affects institutional outcomes.^8 In addition, we allow for intermediate voting rules beyond majority and unanimity (intermediate voting rules are surprisingly understudied in the formal literature in view of their prevalence). Finally, our experimental treatments include juries with homogeneous and heterogeneous preferences.

^6 Ladha, Miller, and Oppenheimer (1999) provided experimental evidence for strategic voting in a related setting. Bottom, Ladha, and Miller (2002) illustrated the implications of non-Bayesian updating in the Condorcet world.

^7 McCubbins and Rodriguez (2006) considered a completely different setup with experimental communication. Their subjects need to decide on a solution to an SAT problem (of unknown difficulty) and they allow subjects (with unknown math abilities) to communicate in one round (they can send or not one signal, and listen or not to others' signals). They showed that the quality of individual decisions can decrease after such communication. In another different context, Cooper and Kagel (2005) illustrated how team communication makes groups behave more strategically as well as respond quicker to payoff changes than individuals. The effects of communication have also been studied experimentally in other settings, for example, in partnerships as in Charness and Dufwenberg (2006) or dictator games as in Andreoni and Rao (2009).

^8 Guarnaschelli, McKelvey, and Palfrey (2000) allowed for restricted communication, that is, deliberations taking the form of a straw poll vote (as in Coughlan (2000)). They found that voters tend to expose their private information less than theory predicts and the impact on jury outcomes is small. In contrast, the free-form communication allowed for in our experiments has a dramatic effect on jury outcomes.

1.3. Paper Structure

Section 2 describes the experimental design. The corresponding theoretical predictions are analyzed in Section 3. We start the description of the experimental observations in Section 4, in which we test for strategic voting. The collective outcomes generated by each institution, with and without the possibility to deliberate, are described in Section 5. A detailed analysis of the experimental communication protocols appears in Section 6. The protocols' effects on experimental juries' behavior is discussed in Section 7. Section 8 concludes.

2. EXPERIMENTAL DESIGN

The underlying setup of our experimental design replicates the characteristics of Condorcet's simple model. There is a "red" jar and a "blue" jar: the red jar contains seven red and three blue balls, and the blue jar contains seven blue and three red balls. Throughout the paper, we use the red (blue) jar as a metaphor for a guilty (innocent) defendant. At the start of each period, subjects are randomized into a group of nine subjects (who are assigned labels 1–9 randomly) and one of the jars is chosen by a toss of a fair coin. Subjects receive private information and ultimately need to cast a vote pertaining to their guess of which jar had been chosen, and are each paid according to their own and their (eight) fellow group members' guesses. There are four important components of our experimental design: the private information each subject gets, subjects' ability to interact, the voting rule in place, and subjects' preferences.^9

Information: In each period, after the jar had been selected, each of the nine jurors in a group receives an independent draw (with replacement) from the jar being used. The color of the drawn ball matches the jar's color with probability q = 0.7, commonly referred to as the accuracy of the private signal.

Communication: In the no-communication or "no-chat" treatments, subjects cast their guesses immediately after observing their private draws. In the communication or "chat" treatments, subjects can communicate with one another via a chat screen that automatically opens when subjects receive their private draws. They are able to direct their messages to a subset of their group or to the group as a whole (i.e., send a public message). Messages can take any form and communication is not restricted in time. When subjects are done chatting, they cast their votes for red or blue.

Voting Rules: Once all votes have been received, they are automatically tallied to determine the group outcome. The voting rule, explained to the subjects at the outset of the experiment, is a threshold rule, where the red jar is the group choice if and only if at least (a prespecified) r red votes are submitted. There are three types of treatments, corresponding to three different voting rules: r = 5 (simple majority), r = 7 (two-thirds majority), and r = 9 (unanimity).

Preferences: Subjects' payoffs, which depend on whether the group decision matches the jar being used, vary by treatment. In the homogeneous treatment, subjects' preferences are completely aligned. In the heterogeneous treatment, subjects are randomly assigned (with equal probabilities) the role of weak red or weak blue partisan, which causes a misalignment in preferences. The weak red (weak blue) partisans are predisposed to choose the red (blue) jar or, in other words, require stronger information favoring the blue (red) jar to prefer it. This misalignment is even stronger in the partisan treatment, where jurors are assigned the role of strong red partisan with probability 1/6, a role in which the red outcome is preferred regardless of the realized jar. Subjects are informed of the ex ante distribution of preferences and their own realized preferences in each round (but not the full realization of preferences in their group). The top panel of Table I displays the payoffs (in cents) used in the different treatments.

To summarize, the experiments employ a 3 × 3 × 2 design based on variations in voting rules, jurors' preferences, and the availability of communication among the subjects. Each experimental session implemented one particular voting rule and one particular preference distribution. Within sessions, we conducted 15 periods without communication followed by 15 periods with communication (with one practice round preceding each). Three of the sessions were repeated with the chat periods preceding the no-chat periods to check for order effects. These "reverse order" sessions led to qualitatively identical insights as our baseline treatments. In our analysis below, we therefore pool the data from both types of sessions.^10

The experiments were conducted at the California Social Sciences Experimental Laboratory (CASSEL) at UCLA. The bottom panel of Table I describes the number of subjects participating in each of the treatments (where summands correspond to separate sessions). Overall, 549 subjects participated. The average payoff per subject from the no-chat segment of each session was $9.53, while the corresponding average payoff in the chat segment was $13.11. In addition, each subject received a $5 show-up fee.

^9 The experimental instructions are available in the Supplemental Material (Goeree and Yariv (2011)).

^10 Separate analysis of the sessions in which rounds with communication preceded the rounds without communication is available from the authors upon request.

3. THEORETICAL PREDICTIONS

Our experimental design matches the basic jury setup introduced by Feddersen and Pesendorfer (1998). Formally, consider a group of n = 2k + 1 individuals (subjects, jurors, etc.) who collectively choose one of two alternatives, {red, blue} (as suggested above, this can be understood as a metaphor for a choice between convicting or acquitting a defendant), using a threshold voting rule parameterized by r = 1, …, n. That is, red (convict) is chosen if and only if at least r agents vote in favor of it. In our experimental treatments, n = 9 and r = 5, 7, 9.
TABLE I — EXPERIMENTAL DESIGN

Payoffs (in cents), by treatment and preference type:

  Homogeneous — Neutral [prob. 1]:
                         True Jar Red   True Jar Blue
    Jury choice red           100             10
    Jury choice blue           10            100

  Heterogeneous — Weak Red Partisan [prob. 1/2]:
                         True Jar Red   True Jar Blue
    Jury choice red           150             10
    Jury choice blue           10             50

  Heterogeneous — Weak Blue Partisan [prob. 1/2]:
                         True Jar Red   True Jar Blue
    Jury choice red            50             10
    Jury choice blue           10            150

  Partisan — Neutral [prob. 5/6]:
                         True Jar Red   True Jar Blue
    Jury choice red           100             10
    Jury choice blue           10            100

  Partisan — Strong Red Partisan [prob. 1/6]:
                         True Jar Red   True Jar Blue
    Jury choice red           150             50
    Jury choice blue           10             25

Subjects per treatment (summands correspond to separate sessions):

                       Homogeneous                     Heterogeneous                       Partisan
                   r=5   r=7               r=9    r=5           r=7        r=9           r=5   r=7   r=9
  No chat, chat     36   27^a+36^b+45^b     27    18+45^{b,c}   27+45^b    36+45^{b,d}    27    36    36
  Chat, no chat    N/A   N/A                18    27            18         N/A           N/A   N/A   N/A

  ^a Chat treatment was run for 18 rounds (in addition to the practice round).
  ^b Only the no-chat treatment was run.
  ^c Chat treatment was run for only 9 rounds (in addition to the practice round).
  ^d This session was run for only 9 rounds (in addition to the practice round).
At the outset, a state of nature is chosen randomly from {R, B} (experimentally, red or blue jar; metaphorically, guilty or innocent defendant), and individuals' private preference types are randomized from T = {neutral, weak red partisan, weak blue partisan, strong red partisan} according to the prior probability p = (p_N, p_WR, p_WB, p_SR). Utility mappings for each type are determined naturally according to Table I. After preference types have been determined, each agent observes a conditionally independent signal s ∈ {red, blue} of accuracy q. That is, Pr(s = red|R) = Pr(s = blue|B) = q, where q = 0.7 in all our experimental treatments. After observing all of their private information (composed of preference type and signal), when communication is not available, agents vote simultaneously, the group choice is determined according to r, and agents' earnings are determined accordingly. In our experimental design, each treatment corresponds to a different prior p. In particular, in the homogeneous treatment, p_N = 1; in the heterogeneous treatment, p_WR = p_WB = 1/2; and in the partisan treatment, p_N = 5/6 and p_SR = 1/6.

A strategy is then a mapping σ : T × {red, blue} → [0, 1], which associates a probability of choosing red (or convict) with each realization of private preference type and revealed signal. We concentrate on symmetric responsive equilibria, in which agents of the same extended type (comprising preference type and private signal) use the same strategy, and not all extended types use the same strategy. Using the techniques of Feddersen and Pesendorfer (1998), we identify the equilibrium strategies generated by the assortment of our experimental sessions.

Consider first the homogeneous treatments. When p_N = 1 and r = k + 1, the unique symmetric equilibrium entails agents following their signals, that is, selecting red (blue) when observing red (blue), as in Austen-Smith and Banks (1996). Intuitively, if all agents follow their signals, then a pivotal agent knows that precisely k agents observed the signal red and k agents observed the signal blue. These signals cancel one another, and the agent best responds by following her own signal. For r > k + 1, this sincere behavior is no longer part of an equilibrium. Indeed, if all agents vote sincerely, then pivotality implies that there are at least two more red signals in the group, implying a best response of red regardless of one's signal. As it turns out, for r > k + 1 the unique responsive equilibrium entails agents with a red signal voting red and those with a blue signal mixing between a red and a blue vote. Let the equilibrium probability of choosing red when observing a blue signal be α. Then, after simplifying terms, we get

    Pr(red|pivotal) = Pr(red | r − 1 red votes, n − r blue votes)
                    = [q + (1 − q)α]^{r−1} [(1 − q)(1 − α)]^{n−r}
                      / ( [q + (1 − q)α]^{r−1} [(1 − q)(1 − α)]^{n−r} + [1 − q + qα]^{r−1} [q(1 − α)]^{n−r} ),

which, for indifference, must equal q. The solution of this equality for different values of q, n, and r identifies the corresponding equilibria, as they appear in the top panel of Table II for q = 0.7, n = 9, and r = 7, 9.
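The indifference condition is easy to solve numerically. The following minimal sketch (an illustration, not the authors' code) uses bisection to recover the mixing probabilities for q = 0.7 and n = 9:

```python
def prob_red_given_pivotal(alpha, q=0.7, n=9, r=7):
    """Posterior that the state is red, conditional on being pivotal."""
    red_state = (q + (1 - q) * alpha) ** (r - 1) * ((1 - q) * (1 - alpha)) ** (n - r)
    blue_state = (1 - q + q * alpha) ** (r - 1) * (q * (1 - alpha)) ** (n - r)
    return red_state / (red_state + blue_state)

def solve_alpha(q=0.7, n=9, r=7, tol=1e-10):
    """Bisect on alpha so that Pr(red | pivotal) = q; the posterior is decreasing in alpha."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prob_red_given_pivotal(mid, q, n, r) > q:
            lo = mid  # posterior too high: mixing must rise
        else:
            hi = mid
    return (lo + hi) / 2

for r in (7, 9):
    print(f"r = {r}: alpha = {solve_alpha(r=r):.3f}")  # approximately 0.31 and 0.77
```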
The analysis of the heterogeneous and partisan treatments is similar in spirit and, therefore, is omitted. Table II summarizes all equilibrium predictions germane to our no-communication experimental sessions, as well as the probabilities of the two types of errors: choosing R (red or convict) when the state is actually B (blue or innocent) or, alternatively, choosing B (blue or acquit) when the state is actually R (red or guilty).^11 The former is often referred to in the jury literature as the probability of convicting the innocent and is thus denoted Pr(C|I), while the latter is referred to as the probability of acquitting the guilty and is denoted Pr(A|G).

4. STRATEGIC VOTING

4.1. Aggregate Analysis

We start by considering the extent to which subjects behaved strategically. Table III summarizes the relevant results for all sessions. Numbers in parentheses correspond to theoretical predictions.^12 As will be seen in Section 6, in the treatments that allow for communication, subjects revealed their private signals at very high rates across treatments. We therefore report the aggregate choices in those sessions as a pair of percentages x%/y%, where x% (y%) is the percentage of red choices when, given the agent's preferences and the entire signal profile, the optimal decision was red (blue). Thus, a best response to truthful revelation would consist of the pair 100%/0%.^13 Strong partisans had a dominant action entailing a vote for red; therefore, we report their aggregate choices only.^14

^11 The multiplicity of equilibria in the heterogeneous case when r = 7 or r = 9 is inherent for symmetric settings in which there are weak red and weak blue partisans. In particular, this multiplicity could not be avoided by specifying different symmetric rewards for correct matches between group choice and actual states for both types of partisans.

^12 Since there are multiple equilibria for the heterogeneous treatment, we do not include any theoretical predictions for the corresponding sessions. The theoretical error predictions are based on the equilibrium strategies and realized signal profiles in the experimental sessions.

^13 For instance, in the heterogeneous treatment, red types require only 4 out of 9 signals to be red for red to be the optimal choice. So, for example, under simple majority (r = 5), 93% of the time in which there were at least 4 red signals and a red type received a red signal, she voted red. Similarly, blue types require 6 out of 9 red signals to prefer red over blue, and numbers are calculated accordingly.

^14 Partisan subjects did not always use their dominant action. This can be explained by either a desire to conform or match the winner (see Goeree and Yariv (2007)) combined with probability matching (Siegel and Goldstein (1959)), or some form of altruism (particularly in the case of the two supermajoritarian rules), as in Feddersen, Gailmard, and Sandroni (2009). We return to their behavior in some of the individual-level analysis below.
TABLE II — THEORETICAL PREDICTIONS

Homogeneous:
                                    r = 5     r = 7     r = 9
  Red votes with red signals          1         1         1
  Red votes with blue signals         0        0.31      0.77
  Pr(red|blue) [= Pr(C|I)]          0.099     0.108     0.206
  Pr(blue|red) [= Pr(A|G)]          0.099     0.280     0.474

Heterogeneous (r = 5: one equilibrium; r = 7: Eq. (1)–(4); r = 9: Eq. (1)–(3)):
                                    r = 5   Eq.(1)  Eq.(2)  Eq.(3)  Eq.(4)   Eq.(1)  Eq.(2)  Eq.(3)
  Weak red partisans
    Red votes with red signals        1       1       1       1       1        1       1       1
    Red votes with blue signals       1       1       1      0.32     1        1       1       1
  Weak blue partisans
    Red votes with red signals        0      0.41    0.64     0       0       0.14    0.02     0
    Red votes with blue signals       0       0       0       0       0        0       0       0
  Pr(red|blue) [= Pr(C|I)]           0.5    0.166   0.224   0.002    0.5     0.003   0.002    0.5
  Pr(blue|red) [= Pr(A|G)]           0.5    0.678   0.472   0.972    0.5     0.995   0.998    0.5

Partisan:
                                    r = 5     r = 7     r = 9
  Strong red partisans
    Red votes with red signals        1         1         1
    Red votes with blue signals       1         1         1
  Neutrals
    Red votes with red signals       0.97       1         1
    Red votes with blue signals       0        0.18      0.72
  Pr(red|blue) [= Pr(C|I)]          0.286     0.113     0.201
  Pr(blue|red) [= Pr(A|G)]          0.064     0.275     0.48
TABLE III — STRATEGIC VOTING ACROSS TREATMENTS

Homogeneous:
                                   Without Communication               With Communication
                                   r = 5      r = 7      r = 9         r = 5      r = 7      r = 9
  Number of individual decisions    540       1620        675           540        486        675
  Number of group decisions          60        180         75            60         54         75
  Red votes with red signal      91% (100%) 89% (100%) 90% (100%)     99%/36%    98%/10%    98%/10%
  Red votes with blue signal      7% (0%)   24% (31%)  39% (77%)      69%/5%     91%/5%     95%/4%
  Wrong jury outcomes            10% (8%)   35% (22%)  48% (40%)     10% [7%]    7% [5%]    8% [8%]
    True jar blue                 7% (10%)   5% (10%)   0% (21%)     16% [9%]    7% [3%]    5% [8%]
    True jar red                 13% (6%)   60% (30%)  97% (54%)      4% [4%]    7% [6%]   11% [8%]

Heterogeneous:
                                   Without Communication               With Communication
                                   r = 5      r = 7      r = 9         r = 5      r = 7      r = 9
  Number of individual decisions   1080       1350        945           675        675        540
  Number of group decisions         120        150        105            75         75         60
  Red types
    Red votes with red signal       86%        88%        91%         93%/25%    82%/33%    84%/6%
    Red votes with blue signal      37%        44%        49%         78%/3%     56%/6%     72%/11%
  Blue types
    Red votes with red signal       64%        59%        62%         91%/44%    84%/28%    84%/38%
    Red votes with blue signal      16%        15%        19%         91%/13%    73%/12%    73%/24%
  Wrong jury outcomes               23%        41%        60%           7%         13%        30%
    True jar blue                   24%         4%         0%          11%          3%        10%
    True jar red                    23%        83%       100%           3%         22%        52%

Partisan:
                                   Without Communication               With Communication
                                   r = 5      r = 7      r = 9         r = 5      r = 7      r = 9
  Number of individual decisions    405        540        540           405        540        540
  Number of group decisions          45         60         60            45         60         60
  Neutral types
    Red votes with red signal    91% (97%)  90% (100%) 71% (100%)    100%/16%    98%/41%    80%/33%
    Red votes with blue signal   18% (0%)   21% (18%)  28% (72%)      95%/2%     82%/17%    38%/8%
  Partisan types
    Red votes with red signal    81% (100%) 90% (100%) 68% (100%)   83% (100%) 90% (100%) 82% (100%)
    Red votes with blue signal   57% (100%) 45% (100%) 50% (100%)   48% (100%) 47% (100%) 42% (100%)
  Wrong jury outcomes            27% (23%)  25% (16%)  43% (34%)       9%         13%        15%
    True jar blue                36% (35%)   3% (9%)    0% (20%)      12%         21%         4%
    True jar red                 12% (3%)   48% (23%)  100% (52%)      5%          8%        22%

Notes: Numbers in parentheses are theoretical predictions; numbers in square brackets are the errors that would have resulted from full revelation followed by optimal group choices (see the text).
Last, for the homogeneous case, there is an appealing equilibrium (in terms of Pareto optimality or efficiency) in which all agents reveal their signals and vote for the commonly preferred alternative. The errors that would have resulted in the experiment with such behavior are reported in square brackets in the top panel.

There are several insights one gains by inspecting Table III. First of all, in the homogeneous and partisan no-communication treatments, behavior generally follows the comparative statics (if not the precise numbers) predicted by theory. In particular, voting against one's blue signal under rules r = 7 and r = 9 is significantly different from 0 for any conventional level of confidence. Furthermore, voting against a blue signal increases significantly with the voting rule (again, for any conventional level of confidence).^15 Nonetheless, in all of our treatments, subjects took at least 20% longer to make a decision when ultimately voting against their signal, suggesting that voting against one's signal may involve a more complex cognitive process.^16

The qualitative deviations from the theoretical predictions pertain to the probability of convicting an innocent defendant (i.e., the probability that the group outcome is red when the blue jar is being used).^17 In the homogeneous and partisan no-communication treatments, this probability declines with the size r of the supermajority needed for conviction (a choice of red). This comparative static, which is not predicted by theory, has been observed before in the experiments of Guarnaschelli, McKelvey, and Palfrey (2000), who focused on simple majority and unanimity. Furthermore, under unanimous voting rules (r = 9), convictions (red choices) are hardly observed, and so wrong convictions (Pr(C|I)) are rare. Indeed, without the ability to communicate, it is hard to achieve a unanimous profile of votes. This is important from a policy perspective, as the level of Pr(C|I) is often the object of minimization when assessing institutions. In the lab, absent deliberation, unanimous rules generate very few innocent convictions (see also Guarnaschelli, McKelvey, and Palfrey (2000)).

Looking at the communication treatments, Table III illustrates that subjects respond to the entire profile of signals available in their group, although they appear to place too much weight on their own signals (conditional on full revelation). This ties to the reduced overall probabilities of wrong outcomes when communication is available. Note, however, that under unanimity, the probabilities of wrong outcomes when the jar is blue (wrongful convictions) are significantly higher with communication than without, at any conventional confidence level. Indeed, as will be shown below, subjects can more easily create a majority, super-majority, or even a unanimous vote for red when deliberation precedes choice.

Throughout the paper, we report results from all sessions. It is important to note that when looking at sessions in which the order of the communication and no-communication treatments was reversed, we see very little difference in strategic behavior^18 and wrong jury outcomes occur at similar, though slightly lower, frequencies.

^15 Results for homogeneous preferences can readily be compared to those obtained by Guarnaschelli, McKelvey, and Palfrey (2000) for groups of size 3 and 6, and majoritarian and unanimous voting rules. Our observations are consistent with those reported there.

^16 Voting with the signal took an average of 4.14, 5.51, and 3.05 seconds within the homogeneous, heterogeneous, and partisan treatments, respectively. Voting against one's signal took an average of 5.13, 7.22, and 3.67 seconds within the respective homogeneous, heterogeneous, and partisan treatments. All differences are significant at any reasonable level.

^17 The theoretical values concerning wrong decisions (the bottom three rows in each panel) capture the probabilities that would have been generated had subjects used the theoretical equilibrium strategies for the experimental signal realizations.

^18 For the sessions with homogeneous preferences and r = 9, in which reversed sessions were run and theoretical predictions are unique, looking at votes for red with a red signal and with a blue signal, we get p-values that correspond to differences in the baseline sessions of 0.82 and 0.62, respectively.

4.2. Individual Behavior

To uncover the determinants of strategic voting and to test for learning, we estimate a discrete choice model of each individual's decision to vote red as a function of several explanatory variables. In addition to dummy variables corresponding to voting rules 7 and 9, we consider several additional dummy variables: red sample takes the value 1 when the subject's signal is red; red type takes the value 1 when the subject is a weak red partisan in the heterogeneous treatments, and when the subject is a strong partisan in the partisan treatments; past wrong blue dec(ision) takes the value 1 when blue was the outcome in the previous round and ended up not coinciding with the realized state, and thereby allows us to identify reinforcement forces; late allows us to account for learning by taking the value 1 when the decision is taken in the last 5 periods of the session. In addition, number of red signals captures the number of red signals in the group, and we consider several natural interaction terms.

Table IV contains the marginal effects that correspond to our estimations (where errors are clustered by subject). Several insights come out of these estimations. First, and in line with our aggregate analysis, subjects put significant weight on their private information, captured by our red sample variable. They do so in a significantly more prominent manner in the treatments without communication. As we will see below, subjects frequently reveal their private information in the communication treatments. Therefore, the number of red signals variable is a proxy for the public information available in the communication treatments. Table IV illustrates the significant impact of the group's information whenever communication is possible (in fact, in the homogeneous and heterogeneous treatments, two additional red signals within the group influence behavior approximately as much as a private red signal, while in the partisan treatment an additional red signal in the group outweighs the effect of a private red signal).
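For readers who want to replicate this kind of specification, a minimal sketch of such a probit with subject-clustered standard errors follows (an illustration with hypothetical inputs: the file `votes.csv`, the column `subject_id`, and the variable spellings are assumptions, not the authors' materials):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per individual decision, with the dummies
# described in the text (red_sample, rule7, rule9, late, number_red_signals, ...).
df = pd.read_csv("votes.csv")

model = smf.probit(
    "vote_red ~ red_sample + past_wrong_blue_dec + rule7 + rule9"
    " + red_sample:rule7 + red_sample:rule9"
    " + late + late:red_sample + late:rule7 + late:rule9",
    data=df,
)
# Cluster standard errors by subject, as in Table IV.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})
print(fit.get_margeff().summary())  # marginal effects comparable to Table IV
```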
TABLE IV
PROBIT ESTIMATIONS THAT EXPLAIN RED INDIVIDUAL DECISIONSa

Preferences:                        Homogeneous                          Heterogeneous                          Partisan
Communication:                  No               Yes                No               Yes                No               Yes
Red sample                 0.504*** (0.069)  0.814*** (0.072)  0.514*** (0.047)  0.248*** (0.071)  0.711*** (0.070)  0.226*** (0.082)
Past wrong blue decision  -0.044 (0.043)    -0.088 (0.122)    -0.005 (0.036)    -0.0002 (0.070)   -0.0328 (0.058)    0.209*** (0.060)
Rule 7                     0.271*** (0.103) -0.426* (0.219)    0.001 (0.062)     0.104 (0.168)     0.010 (0.105)     0.644*** (0.099)
Rule 9                     0.385*** (0.066) -0.611** (0.296)   0.052 (0.062)     0.228 (0.183)     0.119 (0.085)     0.234 (0.218)
Number of red signals           —            0.220*** (0.027)       —            0.221*** (0.030)       —            0.360*** (0.043)
Red sample * rule 7       -0.311* (0.184)   -0.422*** (0.076) -0.029 (0.081)    -0.051 (0.106)     0.025 (0.157)     0.055 (0.093)
Red sample * rule 9       -0.449*** (0.131) -0.485*** (0.074) -0.050 (0.079)    -0.162** (0.079)  -0.349*** (0.120) -0.061 (0.099)
Late                       0.0003 (0.057)   -0.644*** (0.187)  0.026 (0.049)    -0.292** (0.120)   0.014 (0.056)    -0.320* (0.185)
Late * red sample          0.049 (0.045)     0.076 (0.121)    -0.083* (0.045)   -0.184** (0.073)  -0.075 (0.065)    -0.072 (0.071)
Late * rule 7              0.070 (0.067)    -0.003 (0.093)    -0.048 (0.057)     0.078 (0.052)    -0.004 (0.087)     0.155** (0.072)
Late * rule 9              0.012 (0.059)     0.094 (0.093)    -0.072 (0.062)     0.219*** (0.056) -0.044 (0.080)     0.206*** (0.078)
Late * past wrong blue dec -0.122 (0.082)    0.158 (0.167)     0.003 (0.059)     0.033 (0.134)    -0.052 (0.108)    -0.041 (0.154)
Late * number of red signals    —            0.152** (0.064)        —            0.052** (0.024)        —            0.066* (0.037)
Number of red signals * rule 7  —            0.136** (0.054)        —           -0.047 (0.040)         —           -0.186*** (0.050)
Number of red signals * rule 9  —            0.219*** (0.085)       —           -0.054 (0.042)         —           -0.067 (0.058)
Red type                        —                 —            0.257*** (0.048) -0.106 (0.105)     0.309*** (0.070)  0.451*** (0.036)
Red type * past wrong blue dec  —                 —            0.016 (0.039)     0.080 (0.121)    -0.007 (0.122)    -0.182 (0.202)
Red type * rule 7               —                 —            0.084 (0.055)     0.010 (0.075)    -0.032 (0.106)    -0.449*** (0.125)
Red type * rule 9               —                 —            0.088 (0.075)     0.019 (0.082)    -0.059 (0.095)    -0.463*** (0.126)
Red type * red sample           —                 —            0.031 (0.038)     0.007 (0.063)    -0.353*** (0.069) -0.063 (0.114)
Red type * number of red signals —                —                 —            0.055** (0.023)        —           -0.050* (0.027)
Pseudo-R2                  0.376             0.710             0.218             0.465             0.28              0.620
Observations               2835              1701              3375              1890              1485              1485

a Robust standard errors are given in parentheses. *Significant at 10% level; **significant at 5% level; ***significant at 1% level.
Second, voting rules have some effect on behavior and on the response to private signals, but the effect is limited and appears most pronounced in the homogeneous preference treatments. Third, types have some effect on behavior, particularly in treatments with strong red partisans. In these treatments, partisan subjects, for whom a red vote is a weakly dominant action, vote red at a significantly greater frequency (notably under the non-unanimous voting rules). Last, learning seemed to play a limited role. Indeed, behavior in later periods is, for the most part, not significantly different from early behavior when communication is unavailable. With communication, subjects did tend to choose the red action less frequently in later periods. Nonetheless, the reaction to the environment (as captured by the interaction terms) did not change significantly across the experimental periods.

In relation to our theoretical predictions, note that in the treatments without communication, individual equilibrium choices depend on the voting rule, the private sample, and the private preference type. This conforms with what we observe in our regression analysis, implying again a qualitative match of our subjects' behavior with the theoretical predictions when communication was unavailable. In what follows, we analyze how this individual behavior aggregates into group decisions, which will allow us to assess outcomes of the institutions we consider.

5. VOTING OUTCOMES

A natural object when comparing institutions is the resulting outcome, that is, the mapping from the characteristics of the group (preferences, information, etc.) to final decisions (e.g., probabilities of conviction in a jury). Theoretically, without communication, the different voting rules generate different outcomes for any of the preference distributions (see Table II). On the other hand, the availability of free-form communication yields an equivalence of the set of outcomes generated by intermediate voting rules (and a subset of those outcomes under unanimity). Comparison of outcomes is particularly important when making policy decisions. It is the natural basis upon which to choose one institution over another, as it captures information about the likelihood of specific decisions (say, conviction or acquittal) for particular profiles of agents (e.g., jurors' political stands) and available information (such as testimonies).

We start with the homogeneous treatments, which are the easiest to analyze in that the characteristics of the group can be fully summarized by the number of red signals in the group. In these treatments, symmetry ensures that outcomes are encapsulated formally by the correspondence between the number of red signals in the group and the eventual probability of collectively choosing the red jar. Table V contains the experimental outcomes with and without communication.
TABLE V
FREQUENCY OF RED CHOICES/CONVICTIONS WHEN PREFERENCES ARE HOMOGENEOUSa

                       Without Communication                 With Communication
Number of
Red Signals       r = 5       r = 7       r = 9        r = 5       r = 7       r = 9
0                 —   (0)     0%  (2)     0%  (2)      —   (0)     —   (0)     0%  (1)
1                 0%  (3)     0% (11)     0%  (8)      0%  (4)     0%  (5)     0% (12)
2                 0% (12)     0% (30)     0% (10)      0%  (9)     0%  (4)     0%  (9)
3                 0%  (9)     0% (21)     0% (11)      0% (10)     0%  (8)     0%  (8)
4                25%  (4)     0% (19)     0%  (8)     29%  (7)    10% (10)     0%  (7)
5                56%  (9)    24% (25)     0%  (9)    100%  (4)    50%  (4)    60%  (5)
6               100%  (8)    29% (31)     0% (12)    100%  (9)   100%  (9)   100% (17)
7               100%  (7)    54% (24)     0%  (9)    100%  (9)   100% (10)   100% (11)
8               100%  (7)    81% (11)     0%  (5)    100%  (7)   100%  (3)   100%  (4)
9               100%  (1)   100%  (6)   100%  (1)    100%  (1)   100%  (1)   100%  (1)

a Parentheses contain the corresponding number of observations.
Table V illustrates the stark differences between the outcomes that institutions can generate when communication is not available. For simple majority (r = 5), the empirical outcome approximates the statistically efficient outcome (prescribing a guess of red with 100% probability whenever 5 or more signals within the group are red, and a guess of blue, that is, a guess of red with 0% probability, otherwise) rather well. However, under unanimity, subjects are unable to reach a consensus of red votes and the resulting outcome is significantly less efficient. The availability of communication overturns these results. Once communication is available, empirical outcomes are both nearly efficient and strikingly similar across the different voting rules. Outcomes coincide across all voting rules when there are fewer than 4 or more than 5 red signals. When there are 4 or 5 red signals, rule r = 5 generates different outcomes from the other rules, r = 7 and r = 9, which generate outcomes that are not significantly different from one another (with a p-value of 0.518 corresponding to the null that the two rules do not generate different outcomes).19 In fact, a (nonparametric) Fisher exact probability test on group decisions rejects outcomes being identical across voting rules without communication when the number of red signals is 5–8 at conventional significance levels. When communication is available,
19 While communication may seem simple to conduct when agents share preferences, a large segment of the theoretical literature analyzing institutions has focused on this particular case. The results suggest the importance of accounting for communication in such circumstances.
no pairwise comparison, for any number of red signals or any two voting rules, generated a difference significant at the 10% level.20,21

When preferences are heterogeneous, the analysis is complicated by the fact that it matters who holds which kind of signal. For example, a weak red partisan observing a red signal may affect decisions differently than a weak blue partisan observing a red signal. The effect of communication on outcomes is illustrated in Table VI, which shows the percentage of red choices (convictions) when the majority of signals in the group are red or blue for the different treatments, together with their 95% confidence intervals (based on a normal approximation). Table VI highlights the observation that groups are highly responsive to the majority of signals within the group. For non-unanimous rules, whenever the majority of signals are red, the probability that the group outcome is red exceeds 84%, regardless of the preference distribution and voting rule. Whenever the majority of signals are blue, the probability that the group outcome is red is lower than 13% for all preference distributions and voting rules (including unanimous ones). In particular, the outcomes corresponding to different rules appear rather similar.22,23
20 While the numbers reflecting rates of red choices as a function of the number of red signals do not, strictly speaking, represent a cumulative distribution, they are monotonically increasing from 0 to 1. If one were then to use the (nonparametric) Kolmogorov–Smirnov test, similar results would emerge when the null is taken to be that two voting rules are identical. The values corresponding to any two rules when communication is unavailable are lower than 0.0001. When communication is available, the comparison of rules 5 and 7 leads to a value of 0.466, of rules 5 and 9 to a value of 0.255, and of rules 7 and 9 to a value of 1.
21 We note that similar conclusions can be drawn using regression analysis. Indeed, suppose a group's decision (a dummy achieving the value of 1 when the group decision is red) is explained by the voting rule in place (accounted for by two of the voting rules, say, r = 7 and r = 9, or r = 5 and r = 9) when controlling for the number of red signals being 4 or 5 (and their interactions with the voting rules). The corresponding probit regression yields all of the coefficients regarding voting rules as not significantly different from 0.
22 In fact, looking at the 95% confidence intervals, we gain very similar insights. With the exception of unanimous voting with heterogeneous preferences, the lower bound of the 95% confidence interval corresponding to a majority of red signals in the group exceeds 80% across all treatments. Similarly, the upper bound of the 95% confidence interval corresponding to a majority of blue signals in the group lies below 17% across all treatments (for the homogeneous and heterogeneous treatments, it is below 10%).
23 Kolmogorov–Smirnov tests convey similar messages. Without communication, a Kolmogorov–Smirnov test comparing group decisions across voting rules leads to a rejection of the null hypothesis that outcomes are the same across voting rules (at any conventional level of significance). Kolmogorov–Smirnov tests do not reject the coincidence of outcomes across voting rules r = 5 and r = 7 when conditioning on the more prevalent signal within the group. Outcomes from voting rule r = 9 are significantly different from those corresponding to rules r = 5 and r = 7 when the majority of signals are red in the heterogeneous treatment (at the 5% level) and the partisan treatment (at the 10% level). For those treatments, unanimity generates significantly fewer red outcomes (convictions) when the information suggests red (guilt) is more likely. In all other cases of Table VI, voting rule r = 9 generates statistically similar outcomes to those produced under rules r = 5 and r = 7.
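As an illustration of the Fisher exact comparisons reported above, the sketch below contrasts group decisions under rules r = 7 and r = 9 for groups with 7 red signals and no communication. The cell counts are implied by Table V; framing the comparison as a 2 x 2 table (red vs. blue decisions by rule) is our own construction, not the authors' code.

```python
from scipy.stats import fisher_exact

# Groups with 7 red signals, no communication (counts implied by Table V):
# rule r = 7: 54% red decisions out of 24 groups; rule r = 9: 0% out of 9
red_r7, n_r7 = 13, 24
red_r9, n_r9 = 0, 9

table = [[red_r7, n_r7 - red_r7],
         [red_r9, n_r9 - red_r9]]
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # a small p-value rejects identical outcomes across the two rules
```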
TABLE VI
PERCENTAGE OF RED CHOICES/CONVICTIONS WITH COMMUNICATIONa

Preferences:           Homogeneous                                    Heterogeneous                                  Partisans
Voting Rule:     r = 5          r = 7          r = 9           r = 5          r = 7          r = 9           r = 5          r = 7          r = 9

Majority red     100%           92.6%          94.7%           97.6%          84.2%          51.7%           100%           97.2%          84.4%
signals         [100%, 100%]   [89.3%, 95.9%] [92.4%, 97.1%]  [96.0%, 99.1%] [80.3%, 88.1%] [45.7%, 57.8%]  [100%, 100%]   [95.4%, 99.0%] [80.2%, 88.6%]

Majority blue    6.7%           3.7%           0%              5.9%           2.7%           6.5%            4%             12.5%          10.7%
signals         [3.7%, 9.6%]   [1.3%, 6.1%]   [0%, 0%]        [3.2%, 8.5%]   [1.0%, 4.4%]   [3.6%, 9.3%]    [1.4%, 6.6%]   [8.1%, 16.9%]  [6.9%, 14.5%]

a Square brackets contain the corresponding 95% confidence intervals.
To conclude, without communication different voting rules yield significantly different group outcomes. The availability of communication reduces the effects of voting rules on outcomes. Specifically, non-unanimous voting rules generate similar outcomes in all of our experimental circumstances. Unanimous rules make it harder for groups to achieve the red outcome (conviction) and, therefore, appear different at times when the majority of signals in the group are red. Even this difference vanishes when preferences are homogeneous. In terms of efficiency, individuals' responsiveness to group information is echoed in the generated outcomes, which are significantly more efficient in the presence of communication. From a policy perspective, this suggests that deliberation may be an important instrument for design and, when introduced, voting rules in and of themselves may be far less so. In the next section we analyze the communication protocols that emerged and gain more understanding of how group outcomes are determined in the presence of communication.

6. COMMUNICATION PROTOCOLS

6.1. Aggregate Protocol Characteristics

We start by reporting general properties of the communication protocols. Table VII summarizes the percentage of agents truthfully reporting their signals, misreporting their private signals (in the "lies" rubric), or not revealing anything regarding their private information. Furthermore, we account for the percentage of messages (truthful or not) that were sent publicly to the entire group.24 As can be seen, across treatments, a striking percentage of subjects reveal their signals truthfully, and almost all subjects send messages to their entire group. These results contrast with those regarding voting without communication. While subjects are perfectly capable of behaving strategically when casting a vote, they are not very strategic when sending messages. Indeed, given that subjects react to group signals in a substantial way (see, e.g., Table IV), partisan subjects in the heterogeneous or partisan treatments would have an incentive to misrepresent signals that go against their leaning.25
24 The coding was done for the sessions in which no communication treatment preceded the communication treatments. All coding was done by two independent research assistants who were not privy to our research questions.
25 This is consistent with "excessive" truthful reporting observed in other experimental setups, such as the Crawford and Sobel (1982) setting; see Cai and Wang (2006).
TABLE VII
AGGREGATE MESSAGE PROFILES

                          Messages                             Average Number        Average Number
                Truthful    Lies    Nothing    Public       of Signal Messages     of Type Messages
Homogeneous
  r = 5            90%       10%       0%       100%               8.67                   —
  r = 7            98%        2%       0%        96%               8.49                   —
  r = 9            98%        2%       0%       100%              13.88                   —
Heterogeneous
  r = 5            88%       12%       0%       100%              13.79                  0.21
  r = 7            88%       10%       2%       100%              15.50                  0.77
  r = 9            89%       10%       1%       100%               8.67                  1.20
Partisan
  r = 5            93%        6%       1%       100%               7.92                  0.04
  r = 7            89%        9%       2%       100%               7.91                  0.31
  r = 9            92%        8%       0%       100%               8.09                  0.16
Table VII also reports the average number of messages conveying signal realizations and the average number of messages conveying individual types (the latter are relevant for the heterogeneous and partisan treatments). The former is significantly greater than the latter. In fact, type revelation occurs very rarely. For example, in the partisan treatments, the average number of types revealed is significantly lower than 0.5 at any conventional significance level. It is worth noting that in the homogeneous treatments, unanimous chat sessions were (insignificantly) faster than majoritarian ones. The average round length under unanimity (majority) was 39 ± 9 (55 ± 9) seconds.26 In the heterogeneous treatments, however, communication was significantly longer under unanimity (96 ± 13 seconds) than under simple majority (26 ± 11 seconds) or 2/3 supermajority (36 ± 13 seconds).

6.2. Sequencing

To gain insights regarding the endogenous formation of communication protocols, we identified messages that contained information about private signals and messages that contained suggestions regarding how the group or particular individuals should act.27 Figure 1 depicts the sequencing of messages as follows. We normalized the length of all conversations within a treatment to 20 periods.

26 This relates to Blinder and Morgan (2005), who conducted an experiment in which groups were required to solve two problems: a statistical urn problem and a monetary policy puzzle. The groups could converse before casting their votes. They found no significant difference in the decision lag when group decisions were made by majority rule relative to when they were made under a unanimity requirement. See Cooper and Kagel (2005) for another related study.
27 Again, these were coded by an independent research assistant.
FIGURE 1.—Sequencing within communication protocols. The x axis denotes normalized period and the y axis denotes percentage of signal or suggestion messages on left or right panels, respectively.
For each period, we calculated the percentage of messages sent that contained signals or suggestions as described above. Each rubric of the figure corresponds to a different treatment and contains two graphs: the left one depicting the evolution of signal messages and the right one illustrating the evolution of suggestion messages.28

Roughly speaking, conversations are consistently composed of two phases. First, subjects exchange information. Later, they converse about how to act on the collective information. This depiction holds across the different preference settings and the different voting rules. This split into phases allows us to identify "leaders," subjects who consistently make suggestions for group and individual ultimate decisions. As it turns out, leaders do not always appear. Some sessions had unique individuals who sent numerous messages (namely, the homogeneous treatment with simple majority and the partisan treatment with unanimity). In other treatments, no clear leaders appeared. We suspect that the emergence of leaders, while certainly a possibility when communication is available, is group specific.29

6.3. Communication Volume and Outcomes

We now inspect the relation between the volume of communication and the accuracy of decisions. Table VIII reports the average number of signals, the average number of overall messages (termed chat length), and the percentage of messages pertaining to observed signals in all treatments, for group decisions that matched the actual state (so-called correct) and group decisions that did not match the actual state (so-called incorrect). As can be seen from the table, while the number of signals transmitted is not significantly correlated with the groups' accuracy, the length of conversation as well as the percentage of signals transmitted within the conversation are significantly correlated with decision accuracy. Indeed, correct decisions are associated with shorter communication phases and, consequently, greater fractions of the conversations being dedicated to the transmission of information.

28 Since preference types were rarely revealed, as described above, we do not include them in Figure 1.
29 For the jury context, the sessions in which leaders emerged may be particularly germane. Indeed, in many U.S. courts, a jury foreperson is nominated, either by the jury itself or by the judge. The jury's foreperson effectively acts as a leader, having control over some of the deliberation process as well as serving as the jury's delegate in all communications with the judge in charge (see, e.g., Abbott and Batt (1999)).
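A minimal sketch of the normalization behind Figure 1 follows, assuming a hypothetical message-level DataFrame with a conversation identifier, a within-conversation message rank, and a coder's signal-message flag; the binning rule (ceiling of the rescaled rank) is our assumption about how messages map into the 20 normalized periods.

```python
import numpy as np
import pandas as pd

# Hypothetical coded messages: conversation id, message rank, signal-message flag
msgs = pd.DataFrame({
    "conversation_id": [1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "order":           [1, 2, 3, 4, 1, 2, 3, 4, 5, 6],
    "is_signal":       [1, 1, 0, 0, 1, 1, 1, 0, 0, 0],
})
# Map each message onto one of 20 equal-length normalized periods
conv_len = msgs.groupby("conversation_id")["order"].transform("max")
msgs["period"] = np.ceil(20 * msgs["order"] / conv_len).astype(int)

# Share of signal messages per normalized period (the left panels of Figure 1)
signal_share = msgs.groupby("period")["is_signal"].mean()
print(signal_share)  # signal shares are high early and fall later in the conversation
```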
TABLE VIII
VOLUME OF CHATS AND DECISION ACCURACY

                            Homogeneous                Heterogeneous                 Partisan
                        r = 5   r = 7   r = 9      r = 5   r = 7   r = 9      r = 5   r = 7   r = 9      Wilcoxon
Signals      Correct     9.31    8.96   15.19      14.35   16.42   17.14       8.39    8.46    8.55      W = 77,
             Incorrect   8.67    8.57    9.00       9.33   17.60   10.83       9.00    8.25    9.11      p < 0.48
Chat length  Correct    20.96   19.36   32.50      23.93   40.50   30.00      11.90   17.27   18.67      W = 57,
             Incorrect  30.67   44.25   34.00      38.00   47.29   38.06      30.50   28.63   31.44      p < 0.01
% signals    Correct     0.44    0.46    0.47       0.60    0.41    0.29       0.70    0.49    0.46      W = 54,
             Incorrect   0.28    0.20    0.27       0.46    0.36    0.28       0.30    0.29    0.29      p < 0.004
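The Wilcoxon column of Table VIII can be illustrated with a rank-sum comparison of the nine per-treatment cells for correct versus incorrect decisions. The sketch below uses the chat-length cells from the table; pooling the nine cells per row into two samples is our reading of how the reported W statistic was computed (the rank sum of the correct cells reproduces W = 57).

```python
from scipy.stats import mannwhitneyu, rankdata

# Average chat lengths per treatment cell, read off Table VIII
correct   = [20.96, 19.36, 32.50, 23.93, 40.50, 30.00, 11.90, 17.27, 18.67]
incorrect = [30.67, 44.25, 34.00, 38.00, 47.29, 38.06, 30.50, 28.63, 31.44]

# Rank sum of the "correct" cells in the pooled ranking: reproduces W = 57
ranks = rankdata(correct + incorrect)
W = ranks[: len(correct)].sum()

# Mann-Whitney U equals W minus n(n+1)/2; its p-value matches the rank-sum test
U, p = mannwhitneyu(correct, incorrect, alternative="two-sided")
print(W, U, p)  # p comes out around 0.01, in line with the table
```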
7. GROUP BEHAVIOR AND SUPERMAJORITIES

One rationale for the equivalence of voting rules when free-form communication is available is that agents can simply circumvent the voting rule by deciding which alternative they would like to implement during deliberations and then voting unanimously for that alternative. A slight subtlety arises for unanimous voting rules, for which unanimous choices in the voting stage are not robust to unilateral deviations (hence, the equivalence pertains only to intermediate voting rules, and the unanimous voting rules generate a subset of outcomes).

Figure 2 depicts the cumulative distribution function corresponding to all possible supermajorities (5–9) for all treatments. Note that for all of our treatments, the cumulative distribution functions corresponding to the treatments without communication (solid lines) are stochastically dominated by those corresponding to treatments with communication (dashed lines). Furthermore, the cumulative distribution functions relating to the no-communication treatments are concave, while those relating to the communication treatments are convex. This captures the fact that when communication is not available, most outcomes are achieved with small supermajorities (in fact, the modal outcome is achieved with a supermajority of 5 or 6), while with communication most outcomes are achieved with large supermajorities (indeed, the modal outcomes are achieved with supermajorities of 8 or 9).

FIGURE 2.—Cumulative distribution functions for size of supermajorities acting in consensus.

Table VII illustrated a high percentage of subjects revealing their signals truthfully. Furthermore, Table VI demonstrated the match between group decisions and the majority of reports in the communication stage. These numbers exceed 85% in all treatments with intermediate voting rules. Combined with the evidence captured in Figure 2, they suggest a heuristic process underlying the groups' decision-making: subjects share their private information and then unanimously (or almost unanimously) select the alternative supported by the majority of the signals.

8. CONCLUSIONS

We report observations from an array of experiments that assess the joint impacts of heterogeneous preferences, voting rules, and the availability of communication on group (jury) outcomes. Several important insights emerge from
our analysis. First, in the absence of communication, individuals behave strategically much in the spirit of theoretical jury models and, consequently, different voting rules yield different outcomes. Second, deliberation makes voting rules less crucial for outcomes, particularly non-unanimous ones. This is especially true when the preferences of individuals are aligned. Last, communication protocols have consistent characteristics: messages are public and truthful, they are a powerful determinant of the collective choice, and they are broadly divided into two phases: first, information is shared and, next, a discussion ensues as to how to aggregate that information into a group decision.

The observed similarity in outcomes for non-unanimous experimental juries is consistent with the high variance of non-unanimous voting rules specified across U.S. civil jurisdictions, where non-unanimous decision rules range anywhere from simple majority to 7/8 majority. Beyond the jury context, the results are valuable for any collective decision-making in which individuals communicate prior to taking decisions, be it faculty making hiring decisions, managerial teams making investment decisions, political entities deciding on policies, and so on.

The insights of the paper suggest the importance of using communication as an instrument in institutional design in conjunction with voting rules. Indeed, imposing restrictions on deliberation protocols may be an important avenue for generating desirable collective outcomes. Put differently, while much of the focus of the literature on collective decision making is on agents who are pivotal during the voting stage, understanding the agents who are effectively pivotal in the communication stage could be equally important. In fact, in practice, in many environments, agenda setting plays an important role in the design of collective decisions. In a way, an agenda can be thought of as a predetermined communication protocol, which, as the experimental results advise, may be crucial for generating sought-after outcomes. In fact, even without restricting protocols, the consistent sequencing of endogenous protocols we observe opens the door to new questions regarding institutional design.

So far, the theoretical literature on deliberative voting has assumed that communication is either very short (entailing one round of communication, as in Austen-Smith and Feddersen (2005)) or is free-form (as in Gerardi and Yariv (2007)), much like in the experiments.30 Theoretical results suggest that when communication protocols are unrestricted (e.g., Gerardi and Yariv (2007)), intermediate voting rules are equivalent in terms of the set of sequential equilibrium outcomes they generate. Under unanimity, only a subset of the outcomes that can result with intermediate voting rules can be implemented. These results illustrate the potential

30 One exception is Lizzeri and Yariv (2011), who studied protocols resembling the two-stage ones observed in our experiments. In their setup, agents first need to decide when to halt costly communication that generates public information. Agents then collectively choose an action. The paper identifies environments in which different decision rules generate identical predictions.
effects of communication on collective outcomes, but offer little guidance on the precise product of the collective process. Our experimental results suggest stronger impacts of communication: the selected outcomes are the same across institutions. We suspect that this is due to the particular format the observed (endogenous) communication protocols take. In that respect, our study suggests the importance of comparing different institutions with protocols that are in between the two polar specifications commonly studied: one-shot and fully unrestricted.

REFERENCES

ABBOTT, W. F., AND J. BATT (1999): A Handbook of Jury Research. American Law Institute–American Bar Association. [916]
ANDREONI, J., AND J. RAO (2009): "Just Ask: One- and Two-Way Communication in Dictator Games," Mimeo, University of California, San Diego. [897]
AUSTEN-SMITH, D., AND J. BANKS (1996): "Information Aggregation, Rationality, and the Condorcet Jury Theorem," American Political Science Review, 90, 34–45. [896,901]
AUSTEN-SMITH, D., AND T. FEDDERSEN (2005): "Deliberation and Voting Rules," in Social Choice and Strategic Decisions: Essays in Honor of Jeffrey S. Banks, ed. by D. Austen-Smith and J. Duggan. Berlin: Springer. [893,896,919]
——— (2006): "Deliberation, Preference Uncertainty, and Voting Rules," American Political Science Review, 100, 209–217. [893,896]
BATTAGLINI, M., R. MORTON, AND T. R. PALFREY (2010): "The Swing Voter's Curse in the Laboratory," Review of Economic Studies, 77, 61–89. [897]
BLINDER, A. S., AND J. MORGAN (2005): "Are Two Heads Better Than One? An Experimental Analysis of Group versus Individual Decision-Making," Journal of Money, Credit and Banking, 37, 789–811. [914]
BOTTOM, W., K. LADHA, AND G. MILLER (2002): "Propagation of Individual Bias Through Group Judgment: Error in the Treatment of Asymmetrically Informative Signals," Journal of Risk and Uncertainty, 25, 147–163. [897]
CAI, H., AND J. WANG (2006): "Overcommunication in Strategic Information Transmission Games," Games and Economic Behavior, 56, 7–36. [913]
CHARNESS, G., AND M. DUFWENBERG (2006): "Promises and Partnership," Econometrica, 74, 1579–1601. [897]
CONDORCET, MARQUIS DE (1785): Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris: De l'imprimerie royale. Translated in 1976 to "Essay on the Application of Mathematics to the Theory of Decision-Making," in Condorcet: Selected Writings, ed. by K. M. Baker. Indianapolis, IN: Bobbs–Merrill. [895,896]
COOPER, D., AND J. KAGEL (2005): "Are Two Heads Better Than One? Team versus Individual Play in Signaling Games," American Economic Review, 95, 477–509. [897,914]
COUGHLAN, P. (2000): "In Defense of Unanimous Jury Verdicts: Mistrials, Communication, and Strategic Voting," American Political Science Review, 94, 375–393. [896,897]
CRAWFORD, V. P., AND J. SOBEL (1982): "Strategic Information Transmission," Econometrica, 50, 1431–1451. [913]
DEVINE, D. J., L. D. CLAYTON, B. B. DUNFORD, R. SEYING, AND J. PRYCE (2001): "Jury Decision Making: 45 Years of Empirical Research on Deliberating Groups," Psychology, Public Policy, and Law, 7, 622–727. [894]
DICKSON, E., C. HAFER, AND D. LANDA (2008): "Cognition and Strategy: A Deliberation Experiment," Journal of Politics, 70, 974–989. [897]
ELSTER, J. (1998): Deliberative Democracy. Cambridge: Cambridge University Press. [896]
FEDDERSEN, T. J., AND W. PESENDORFER (1996): "The Swing Voter's Curse," American Economic Review, 86, 408–424. [896,897]
——— (1997): "Voting Behavior and Information Aggregation in Elections With Private Information," Econometrica, 65, 1029–1058. [896]
——— (1998): "Convicting the Innocent: The Inferiority of Unanimous Jury Verdicts Under Strategic Voting," American Political Science Review, 92, 23–35. [895,896,899,901]
FEDDERSEN, T. J., S. GAILMARD, AND A. SANDRONI (2009): "Moral Bias in Large Elections: Theory and Experimental Evidence," American Political Science Review, 103, 175–192. [902]
GERARDI, D., AND L. YARIV (2007): "Deliberative Voting," Journal of Economic Theory, 134, 317–338. [893,896,919]
——— (2008): "Information Acquisition in Committees," Games and Economic Behavior, 62, 436–459. [896]
GOEREE, J. K., AND L. YARIV (2007): "Conformity in the Lab," Mimeo, California Institute of Technology. [902]
——— (2011): "Supplement to 'An Experimental Study of Collective Deliberation'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8852_data and programs.zip; http://www.econometricsociety.org/ecta/Supmat/8852_instructions to experimental subjects.zip. [898]
GUARNASCHELLI, S., R. D. MCKELVEY, AND T. R. PALFREY (2000): "An Experimental Study of Jury Decision Rules," American Political Science Review, 94, 407–423. [897,906]
LADHA, K., G. MILLER, AND J. OPPENHEIMER (1999): "Information Aggregation by Majority Rule: Theory and Experiments," Mimeo, Washington University, St. Louis. [897]
LIZZERI, A., AND L. YARIV (2011): "Sequential Deliberation," Mimeo, New York University and California Institute of Technology. [896,919]
MCCUBBINS, M., AND D. B. RODRIGUEZ (2006): "When Does Deliberating Improve Decisionmaking?" Journal of Contemporary Legal Studies, 15, 9–50. [897]
MEIROWITZ, A. (2006): "Designing Institutions to Aggregate Preferences and Information," Quarterly Journal of Political Science, 1, 373–392. [896]
MYERSON, R. (1998): "Extended Poisson Games and the Condorcet Jury Theorem," Games and Economic Behavior, 25, 111–131. [896]
PALFREY, T. (2006): "Laboratory Experiments in Political Economy," in Handbook of Political Economy, ed. by B. Weingast and D. Wittman. Oxford: Oxford University Press. [897]
PRIEST, G. L., AND B. KLEIN (1984): "The Selection of Disputes for Litigation," Journal of Legal Studies, 13, 1–55. [894]
SIEGEL, S., AND D. A. GOLDSTEIN (1959): "Decision Making Behavior in a Two-Choice Uncertain Outcome Situation," Journal of Experimental Psychology, 57, 37–42. [902]
Chair for Organizational Design, University of Zürich, Blümlisalpstrasse 10, CH-8006, Zürich, Switzerland;
[email protected] and Division of the Humanities and Social Sciences, California Institute of Technology, Mail code 228-77, Pasadena, CA 91125, U.S.A.;
[email protected]. Manuscript received September, 2009; final revision received August, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 923–947
RAIN AND THE DEMOCRATIC WINDOW OF OPPORTUNITY BY MARKUS BRÜCKNER AND ANTONIO CICCONE1 We show that democratic change may be triggered by transitory economic shocks. Our approach uses within-country variation in rainfall as a source of transitory shocks to sub-Saharan African economies. We find that negative rainfall shocks are followed by significant improvement in democratic institutions. This result is consistent with the economic approach to political transitions, where transitory negative shocks can open a window of opportunity for democratic improvement. Instrumental variables estimates indicate that following a transitory negative income shock of 1 percent, democracy scores improve by 0.9 percentage points and the probability of a democratic transition increases by 1.3 percentage points. KEYWORDS: Democratization, transitory economic shocks.
1. INTRODUCTION

WHAT TRIGGERS DEMOCRATIC CHANGE? At least since Lipset (1959), it has been argued that democratic change is often sparked by economic recessions (see also Huntington (1991), Haggard and Kaufman (1995)). We examine the link between recessions and democratic improvements by exploiting within-country variation in rainfall as a source of transitory shocks to sub-Saharan African economies. Our main finding is that negative rainfall shocks are followed by significant improvements in democratic institutions.

There are several theoretical explanations of the link between economic recessions and democratization in the literature (e.g., Lipset (1959), Huntington (1991), Acemoglu and Robinson (2006)). An explanation that fits our framework well is Acemoglu and Robinson's (2001) theory of political transitions. In their theory, negative economic shocks may spark democratic improvement even if shocks are (known to be) exogenous and transitory. This is because transitory negative shocks give rise to a window of opportunity for citizens to contest power, as the cost of fighting ruling autocratic regimes is relatively low. When citizens reject policy changes that are easy to renege upon once the window closes, autocratic regimes must make democratic concessions to avoid costly repression. Hence, democratic improvement is seen as a concession of ruling autocratic regimes when citizens' opportunity cost of contesting power is temporarily low.

Our main measure of democratic institutions is the revised combined Polity IV project score (Marshall and Jaggers (2005)). The Polity score is based on the competitiveness of political participation, the openness and competitiveness of executive recruitment, and constraints on the executive. The

1 We are grateful to Daron Acemoglu, Steven Berry, Masayuki Kudamatsu, and four referees for their helpful comments. Ciccone gratefully acknowledges research support from the Barcelona GSE, CREI, FEDEA-BBVA, and Spanish Ministry of Science Grants SEJ2007-64340 and ECO2008-02779.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8183
Polity IV project attempts to capture not only outcomes, but also procedural rules. The extent to which this goal is achieved is debated, but even critics of the Polity score argue that it is probably the best of the democracy measures used in the literature (e.g., Glaeser, La Porta, Lopez-de-Silanes, and Shleifer (2004)).

The data show some striking instances of democratic improvement following negative rainfall shocks in sub-Saharan Africa. Madagascar transited from autocracy to free democratic elections following a severe drought in 1990. Droughts also preceded free and competitive elections in Mali in 1992 and the multiparty constitution in Mozambique in 1994. Figure 1 shows the evolution of the Polity score for 10 sub-Saharan African countries where democratic improvement was preceded by droughts, defined as rainfall levels below the 20th percentile (a higher Polity score denotes more democratic institutions). Another interesting aspect of the sub-Saharan African data is that there are twice as many democratic transitions following droughts as following rainfall levels above the 80th percentile.

Our empirical analysis yields a statistically significant link between negative rainfall shocks and subsequent improvements in the Polity score. This continues to be the case when we consider improvements in the Polity subscores for the competitiveness of political participation, the openness and competitiveness of executive recruitment, and constraints on the executive. We also find that negative rainfall shocks lead to a statistically significant increase in the
FIGURE 1.—Time series plots of Polity change and drought years. The variable on the y-axis is the revised combined Polity IV score; droughts denote years with rainfall below the 20th percentile of the country-specific rainfall distribution.
probability of a democratic transition, defined following Persson and Tabellini (2003), and to a statistically significant increase in the probability of a step toward democracy, defined following Epstein, Bates, Goldstone, Kristensen, and O'Halloran (2006). The democratic improvements experienced by sub-Saharan African countries following negative rainfall shocks are consistent with Acemoglu and Robinson's theory of political transitions, as negative rainfall shocks lead to transitory drops in gross domestic product (GDP) in our data.2

When citizens' cost of contesting power is proportional to income, as in Acemoglu and Robinson's theory of political transitions, we can push the empirical analysis further and estimate the democratic window-of-opportunity effect of transitory, negative income shocks by using rainfall shocks as an instrument. Our instrumental variables estimates indicate that a transitory negative income shock of 1 percent is followed by an improvement in the Polity score of 0.9 percentage points. The executive constraints score improves by 1 percentage point, the political competition score improves by 0.8 percentage points, and the openness and competitiveness of executive recruitment score improves by 0.9 percentage points. When we consider transitions from autocracy to democracy, we find that a transitory negative income shock of 1 percent increases the probability of a democratic transition by 1.3 percentage points (the unconditional probability of a transition to democracy in our sample is 3.6 percent). These estimates reflect the effect of negative transitory income shocks on democratic improvement under the assumption (exclusion restriction) that rainfall shocks affect democratic change only through their effect on income. This condition would not be satisfied if rainfall had a direct effect on the cost of contesting autocratic rule.3
2 A positive effect of rainfall on the GDP of sub-Saharan African countries also was reported by Benson and Clay (1998), Miguel, Satyanath, and Sergenti (2004), and Barrios, Bertinelli, and Strobl (2010). Benson and Clay reported annual time-series evidence for 6 sub-Saharan African countries between 1970 and 1992, and Miguel, Satyanath, and Sergenti reported annual time-series evidence for 41 sub-Saharan African countries between 1981 and 1999. Our analysis extends the sample further and also differs in that we control for common time effects (shocks affecting all sub-Saharan African countries) and check on the robustness of the rainfall–GDP link. Barrios, Bertinelli, and Strobl examined the effect of rainfall on GDP growth averaged over 5-year periods.
3 There are at least two plausible scenarios where this could be the case. First, road flooding could make it more costly for citizens to coordinate against autocratic regimes. In this case, negative rainfall shocks could lead to democratic improvement because of their direct (negative) effect on the cost of contesting power or because of their (indirect, negative) effect through income. Hence, direct negative effects of rainfall on the cost of contesting power imply that our instrumental variables estimates cannot be interpreted as the effect of transitory income shocks. But the window-of-opportunity theory of political transitions can still be tested by examining whether negative rainfall shocks lead to democratic improvement (this is true as long as the total—direct plus indirect—effect of negative rainfall shocks is a reduction of the cost of contesting autocratic regimes). Second, there is evidence that droughts lead to rural families sending their young men to urban areas (see Cekan (1993)), which could reduce the (coordination) cost of contesting power.
If rainfall shocks open a window of opportunity for democratic change because of their effect on income, then rainfall shocks should have a weak effect on democratic change in countries where the effect of rainfall shocks on income is weak because agricultural sectors are small. This is consistent with our finding of a statistically insignificant effect of rainfall shocks on democratic change and on GDP in countries with agricultural GDP shares below the sample median.4 The result that rainfall shocks have an insignificant effect on democratic change in the sample where they have an insignificant effect on income also suggests that rainfall does not have (strong) direct effects on democratic change.

Our work fits into the empirical literature on the economic determinants of democratic change; see, for example, Przeworski and Limongi (1997), Barro (1999), Przeworski, Alvarez, Cheibub, and Limongi (2000), and Epstein et al. (2006). This literature has found evidence of a positive link between income and democracy, but recent work by Acemoglu, Johnson, Robinson, and Yared (2008, 2009) indicates that this relationship is absent when one focuses on within-country variation using fixed effects specifications (as we do). Our work differs in that we are interested in democratic change following transitory economic shocks. It is for this reason that we rely on rainfall variation as a source of transitory shocks to the aggregate economy. Haggard and Kaufman (1995), Geddes (1999), Berger and Spoerer (2001), and Acemoglu and Robinson (2006) also document democratic improvements following negative economic shocks. Methodologically, our work is related to Paxson (1992), which appears to be the first paper using rainfall shocks to test theoretical implications of transitory economic shocks.5

4 The average agricultural share in these countries is 18 percent, which is about half the average agricultural share in sub-Saharan Africa. Rainfall has a significantly positive effect on GDP and a significantly negative effect on democratic improvement in countries with agricultural GDP shares above the median.
5 Paxson's objective is to test the validity of the permanent income hypothesis (see also Fafchamps, Udry, and Czukas (1998)). Miguel, Satyanath, and Sergenti (2004) examine the link between year-to-year rainfall growth, income growth, and civil conflict. Their aim was to reexamine empirical work arguing that civil conflict is caused by low income growth using instrumental variables (for an early contribution to the civil conflict literature, see Collier and Hoeffler (1998)). Burke and Leigh (2010) use a similar approach to estimate the effect of income growth on democratic transitions. Miguel, Satyanath, and Sergenti's approach cannot be used to test the democratic window-of-opportunity theory. This is because the approach tests whether civil conflict outbreak is more likely following years where rainfall turned out to be low compared to rainfall in previous years. What matters for the window-of-opportunity theory is whether rainfall is low compared to expected future rainfall, not compared to past rainfall. The Supplemental Material Appendix (Brückner and Ciccone (2011)) shows that the effect of year-to-year rainfall growth on democratic improvement in sub-Saharan Africa is statistically insignificant, significantly positive, or significantly negative, depending on the measure of democracy used.
Our work also relates to the political sociology literature that examines the determinants of democratization. Lipset (1959) and Huntington (1991) argued that economic recessions lead to autocratic regimes losing legitimacy, which ends up increasing the probability of democratic change. One explanation for the legitimacy loss following recessions could be that recessions are taken as a sign of government incompetence. The often enormous human costs of government incompetence could motivate altruistic individuals to fight for political change even when they expect the private cost of doing so to be high.

The remainder of this paper is organized as follows. Section 2 discusses data and measurement, Section 3 presents the estimation framework, and Section 4 presents our results. Section 5 concludes.

2. DATA AND MEASUREMENT

Our main measure of democratic institutions is the revised combined Polity score (Polity2) of the Polity IV data base (Marshall and Jaggers (2005)). This variable combines scores for constraints on the chief executive, the competitiveness of political participation, and the openness and competitiveness of executive recruitment. It ranges from −10 to +10, with higher values indicating more democratic institutions. Polity2 is based on the combined Polity score, but is modified for time-series analysis. In particular, changes in the combined Polity score during transition periods are prorated across the span of the transition. Polity IV defines transition periods as periods where new institutions are planned, legally constituted, and put into effect. Democratic and quasi-democratic polities are particularly likely to be preceded by such transition periods (Marshall and Jaggers (2005)). Moreover, Polity2 assigns a score of zero (which Polity IV refers to as neutral) to periods where polities cannot exercise effective authority over at least half of their established territory (Polity IV refers to such periods as interregnum periods).

We perform a separate empirical analysis for the Polity IV subscores for constraints on the chief executive, political competition, and the openness and competitiveness of executive recruitment (Polity IV refers to these variables as concept variables). Constraints on the executive denote a measure of the extent of institutionalized constraints on the decision-making powers of chief executives and ranges from 1 to 7, with greater values indicating tighter constraints. Political competition measures the extent to which alternative preferences for policy and leadership can be pursued in the political arena. This indicator ranges from 1 to 10, with greater values denoting more competition. Finally, the openness and competitiveness of executive recruitment measures the extent to which the politically active population has an opportunity to attain the position of chief executive through a regularized process and the degree to which prevailing modes of advancement give subordinates equal opportunities to become superordinates. It ranges from 1 to 8, with greater values indicating more open and competitive executive recruitment.
We follow the revised combined Polity score in prorating changes during a transition period across its span, and we treat interregnum periods as missing values (in contrast to the combined Polity variable, the Polity concept variables do not have a score that Polity IV considers as neutral). To facilitate the comparison of results for Polity2 with those for the Polity concept variables, we present results for a modified version of Polity2 where we drop interregnum periods.

We also examine transitions to democracy. Persson and Tabellini (2003, 2006, 2008) and the Polity IV project consider countries to be democracies if their Polity2 score is strictly positive; other Polity2 scores correspond to nondemocracies. To capture transitions to democracy, we define a year t democratic transition indicator variable for country c that is unity if and only if democratic improvements between t and t + 1 lead to the country being upgraded to a democracy; if the country already is a democracy at t, the year t indicator is not defined. Transitions away from democracy are defined analogously. The Polity IV project and Epstein et al. (2006) further separate democracies into partial democracies, with Polity2 scores 1–6, and full democracies, with Polity2 scores 7–10. To analyze the effect of rainfall and income shocks on democratic improvement using this classification, we define a year t democratization step indicator variable for country c that is unity if and only if democratic improvements between t and t + 1 lead to the country being upgraded to a partial or full democracy; if the country already is a full democracy at t, the year t indicator is not defined.

We also examine the effect of rainfall shocks on coups d'état in democracies. Polity IV defines coups d'état as a forceful seizure of executive authority and office by a dissident/opposition faction within the country's ruling or political elites that results in a substantial change in the executive leadership and the policies of the prior regime (although not necessarily in the nature of regime authority or mode of governance). We define a coup d'état in democracy indicator variable for year t and country c that is unity if the country is a democracy and there has been a coup, and that is zero if the country is a democracy and there has not been a coup. Our measures of political change are summarized in Table I.

The country–year rainfall estimates come from the National Aeronautics and Space Administration (NASA) Global Precipitation Climatology Project (GPCP). NASA GPCP rainfall estimates are based on data from gauge stations, and microwave, infrared, and sounder data from satellites. Specifically, the NASA GPCP combines special sensor microwave imager emission and scattering algorithms, a geostationary orbital environmental satellite precipitation index, an outgoing long wave precipitation index, information from Tiros operational vertical sounders and National Oceanic and Atmospheric Administration polar orbiting satellites, and measurements from gauge stations to obtain monthly rainfall estimates on a 2.5◦ × 2.5◦ latitude–longitude grid.
TABLE I
MEASURES OF POLITICAL CHANGEa

Polity2: The t to t + 1 change in the revised combined Polity score. The maximum range of this variable is from −20 to 20. Positive (negative) values indicate an improvement (deterioration) in democracy. We also analyze the effect on Polity scores after excluding interregnum periods.

Exrec: The t to t + 1 change in the executive recruitment concept (Polity IV) score. The maximum range of this variable is from −7 to 7. Positive (negative) values indicate an improvement (deterioration) in the executive recruitment concept.

Polcomp: The t to t + 1 change in the political competition concept (Polity IV) score. The maximum range of this variable is from −9 to 9. Positive (negative) values indicate an improvement (deterioration) in the political competition concept.

Exconst: The t to t + 1 change in the executive constraint concept (Polity IV) score. The maximum range of this variable is from −5 to 5. Positive (negative) values indicate an improvement (deterioration) in the executive constraint concept.

Democratic transition: Indicator variable that is equal to unity in year t if and only if the country is a democracy in t + 1 but a nondemocracy in t (the year t indicator is not defined if the country is a democracy in t).

Democratization step: Indicator variable that is equal to unity in year t if and only if the country is upgraded to either a partial or full democracy between t and t + 1 (the year t indicator is not defined if the country is a full democracy in t).

Autocratic transition: Indicator variable that is equal to unity in year t if and only if the country is a nondemocracy in t + 1 but a democracy in t (the year t indicator is not defined if the country is a nondemocracy in t).

Coup in democracy: Indicator variable that is unity if and only if in period t there was a coup d'état in a country with a strictly positive Polity2 score (a democracy).

a Source: Polity IV data base (Marshall and Jaggers (2005)).
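A minimal sketch of constructing the democratic transition indicator of Table I from Polity2 scores follows, using a hypothetical mini-panel (the country code and score values are illustrative, not the Polity IV data): the indicator is 1 when a nondemocracy at t is a democracy at t + 1, and undefined when the country is already a democracy at t.

```python
import numpy as np
import pandas as pd

# Illustrative mini-panel (values are made up, not the Polity IV data)
df = pd.DataFrame({
    "country": ["MLI"] * 4,
    "year":    [1990, 1991, 1992, 1993],
    "polity2": [-7, -7, 5, 6],
})
df = df.sort_values(["country", "year"])
polity_next = df.groupby("country")["polity2"].shift(-1)

dem_now = df["polity2"] > 0     # democracy cutoff: strictly positive Polity2
dem_next = polity_next > 0

# 1 for an upgrade to democracy between t and t + 1; NaN (undefined) when the
# country is already a democracy at t. A full implementation would also treat
# the last observed year of each country as missing.
df["dem_transition"] = np.where(dem_now, np.nan, dem_next.astype(float))
print(df)
```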
A detailed explanation of how gauge measurements are merged with satellite data is provided in Adler et al. (2003).6 In comparison to rainfall estimates based exclusively on gauge measurements, there are two main advantages of the GPCP estimates. First, the GPCP rainfall estimates are less likely to suffer from classical measurement error due to the sparseness of operating gauge stations in sub-Saharan African countries (especially after 1990).7

6 The data are available at http://precip.gsfc.nasa.gov. For a validation study of the GPCP satellite-based rainfall data, see Nicholson et al. (2003).
7 Matsuura and Willmott (2007) provided gauge-based rainfall estimates for a large part of the world and a long time period.
TABLE II
DESCRIPTIVE STATISTICSa

Variable                             Mean       Std. Dev.    Observations
Polity2                              0.249        2.097          955
Exrec                                0.083        0.763          902
Polcomp                              0.183        1.007          902
Exconst                              0.071        0.700          902
Democratic transition indicator      0.036        0.186          700
Democratization step indicator       0.035        0.183          867
Autocratic transition indicator      0.055        0.238          255
Coup in democracy indicator          0.106        0.308          255
Real per capita GDP                  1585.14      1732.38        955
Rainfall (mm per year)               980.39       501.41         955

a See Table I for detailed definitions of the measures of political change.
Moreover, the number of operating gauge stations in a country may be affected by socioeconomic conditions, which could lead to nonclassical measurement error in rainfall estimates. Such errors are less of a concern for GPCP rainfall estimates than for rainfall estimates based exclusively on gauge measurements.8 GPCP rainfall estimates are available from 1979 onward. Our measure of per capita income is real per capita GDP from the Penn World Tables 6.2 (Heston, Summers, and Aten (2006)), which are available up to 2004. Table II contains summary statistics for key data.

3. ESTIMATION FRAMEWORK

To estimate the effect of country-specific rainfall shocks on income, we relate log income per capita in country c at time t ($\log y_{ct}$) to a country-specific fixed effect plus time trend ($\alpha_c + \beta_c t$), time-varying shocks that affect all sub-Saharan African countries ($\phi_t$), and country-specific rainfall levels ($\log Rain_{ct}$):
(1)  $\log y_{ct} = \alpha_c + \beta_c t + \phi_t + \gamma \log Rain_{ct} + \theta \log Rain_{ct-1} + v_{ct}$,

where $v_{ct}$ is a disturbance term. The parameter $\gamma$ captures the contemporaneous effect of country-specific rainfall shocks on income, while $\theta$ captures the lagged effect. The inclusion of lagged effects allows us to examine how quickly the effect of rainfall peters out.
The spatial gauge density underlying their rainfall estimates for sub-Saharan African countries appears to be relatively good for the 1960s and 1970s, but declines thereafter. For example, while the average number of gauge stations per country was 40 in the 1960s, the average drops to 32 in the 1980s, 18 in the 1990s, and 8 after 2000. As a result, gauge coverage after 1990 appears to be unsatisfactory according to the criteria of the World Meteorological Organization (1985) and Rudolf, Hauschild, Rüth, and Schneider (1994).
8 For example, a regression of the Matsuura and Willmott rainfall estimates on lagged per capita GDP, country-specific fixed effects plus time trends, and common time effects yields a statistically significant, negative effect of lagged income on rainfall for the 1980–2004 period we focus on (lagged per capita GDP also has a significant effect on the number of reporting gauges in the Matsuura and Willmott data set). By contrast, lagged GDP has no significant effect on GPCP rainfall.
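A hedged sketch of estimating equation (1) above by least squares, with country fixed effects, country-specific time trends, common year effects, and country-clustered standard errors, follows. The panel is entirely fabricated (country codes, rainfall, and income exist only to make the example run), and explicitly constructing trend columns for all but one country avoids the exact collinearity between a full set of country trends and the year dummies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated panel: four country codes, annual rainfall and income, 1980-2004
rng = np.random.default_rng(0)
countries = ["AGO", "BEN", "MDG", "MLI"]
panel = pd.DataFrame(
    [{"country": c, "year": y} for c in countries for y in range(1980, 2005)]
)
panel["log_rain"] = np.log(rng.lognormal(mean=6.8, sigma=0.2, size=len(panel)))
panel["log_rain_lag"] = panel.groupby("country")["log_rain"].shift(1)
# Income loads on contemporaneous rainfall (gamma = 0.1) plus noise
panel["log_y"] = 7 + 0.1 * panel["log_rain"] + rng.normal(0, 0.05, len(panel))
panel = panel.dropna(subset=["log_rain_lag"])

# Country trends for all but one country; the omitted trend is absorbed
# by the intercept and year dummies, keeping the design full rank
for c in countries[1:]:
    panel[f"trend_{c}"] = (panel["country"] == c) * (panel["year"] - 1980)

trend_terms = " + ".join(f"trend_{c}" for c in countries[1:])
fit = smf.ols(
    f"log_y ~ C(country) + C(year) + {trend_terms} + log_rain + log_rain_lag",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(fit.params[["log_rain", "log_rain_lag"]])  # estimates of gamma and theta
```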
To examine the effect of rainfall shocks on democratic change, we maintain the right-hand-side explanatory variables of (1) but use measures of democratic change on the left-hand side. Our main measure of democratic change is the change in the Polity2 score between t and t + 1, $\Delta D_{ct} = D_{ct+1} - D_{ct}$, where $D_{ct}$ refers to the year t Polity2 score of country c. In this case, the estimating equation becomes
(2)  $\Delta D_{ct} = a_c + b_c t + f_t + c \log Rain_{ct} + d \log Rain_{ct-1} + e_{ct}$,
where $e_{ct}$ is a disturbance term. We use the same estimating equation to examine the effect of rainfall shocks on the change in each of the three Polity concept variables and on the indicator variables for a transition to democracy and a step toward democracy.9 Moreover, (2) is the basis for our analysis of the effect of rainfall shocks on transitions away from democracy and coups d'état in democracies.

Under the assumption that rainfall shocks affect democratic change only through income, we can estimate the effect of transitory income shocks on democratic institutions using an instrumental variables approach. Our analysis of the effect of income shocks on democratic change uses two specifications. The first controls for log income, country-specific fixed effects plus time trends, and common time effects, while the second specification replaces log income with a country-specific recession indicator. This indicator is unity if and only if income in a country falls below its trend for reasons other than shocks affecting all sub-Saharan African countries. Specifically, we first estimate
log yct = αc + βc t + φt + ηct
where η is a disturbance term, using least squares. Then we define a country-specific recession indicator that is unity if $\log y_{ct}$ is below the predicted value $\hat{\alpha}_c + \hat{\beta}_c t + \hat{\phi}_t$ and is zero otherwise.

9 We use linear specifications because probit and (unconditional) logit with fixed effects yield inconsistent slope estimates due to the incidental parameter problem (Greene (2003)). Consistent slope estimates can be obtained using conditional fixed effects logit, which yields qualitatively and statistically the same results as the corresponding linear probability model (the magnitudes of the estimates cannot be compared without knowing the distribution of the fixed effects; see Wooldridge (2002)). The main drawback of conditional fixed effects logit is that estimates do not converge when we include country-specific time trends and common time effects (this is a general problem associated with maximum likelihood estimation of many coefficients in nonlinear models; see, for instance, Greene (2004)).
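A minimal sketch of how the recession indicator implied by equation (3) could be constructed, assuming a pandas panel with hypothetical column names 'country', 'year', and 'log_gdp' (the paper's income data come from PWT 6.2; nothing below is the authors' actual code):

```python
import pandas as pd
import statsmodels.formula.api as smf

def recession_indicator(df: pd.DataFrame) -> pd.Series:
    # Equation (3): country fixed effects, country-specific time trends,
    # and common time effects, estimated by least squares. One country
    # trend is collinear with the year dummies; statsmodels' default
    # pinv-based solver absorbs the redundancy.
    fit = smf.ols(
        "log_gdp ~ C(country) + C(country):year + C(year)", data=df
    ).fit()
    # Unity when log income falls below its predicted value, zero otherwise.
    return (df["log_gdp"] < fit.fittedvalues).astype(int)

df["recession"] = recession_indicator(df)
```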
4. EMPIRICAL RESULTS

Table III, column 1 shows our estimates of the effect of rainfall shocks on the change in the Polity2 score using equation (2). We report least squares estimates and Huber robust standard errors clustered at the country level (in parentheses). All our results refer to the 1980–2004 period.10 The estimates indicate that negative rainfall shocks at t − 1 are followed by statistically significant democratic improvement. In particular, 10 percent lower rainfall levels lead to an improvement of 0.146 points in the Polity2 score, and the effect is statistically significant at the 95 percent confidence level. Given the [−10, 10] range of Polity2, a 0.146 point increase corresponds to an improvement of 0.73 percentage points.

Table III, column 2 estimates the same specification as column 1 but codes interregnum years as missing observations (which is why the number of observations drops to 902) to make the results more readily comparable with our analysis of the Polity subscores in columns 3–5. This yields an effect of t − 1 rainfall shocks that is stronger both quantitatively and statistically than in column 1. Table III, columns 3–5 estimate the effect of rainfall shocks on the change in the Polity subscores for constraints on the executive, political competition, and the openness and competitiveness of executive recruitment. The results show that negative t − 1 rainfall shocks lead to significant democratic improvement in all three dimensions.

TABLE III
RAINFALL AND POLITY CHANGEa

                             Polity2             Exconst    Polcomp     Exrec
                         (1)        (2)            (3)        (4)        (5)
Log rainfall, t         0.261      0.031          0.093     −0.153      0.091
                       (0.347)    (0.381)        (0.111)    (0.152)    (0.171)
Log rainfall, t − 1    −1.461**   −1.660**       −0.459*    −0.578**   −0.485**
                       (0.723)    (0.740)        (0.256)    (0.286)    (0.244)
Country fixed effect     Yes        Yes            Yes        Yes        Yes
Country time trend       Yes        Yes            Yes        Yes        Yes
Common time effect       Yes        Yes            Yes        Yes        Yes
Observations             955        902            902        902        902

a The method of estimation is least squares; Huber robust standard errors (in parentheses) are clustered at the country level. The dependent variable in columns 1 and 2 is the t to t + 1 change in the revised combined Polity score (Polity2); column 2 excludes observations that correspond to interregnum periods. The dependent variable in columns 3–5 is the t to t + 1 change in Polity IV subscores that reflect changes in a country's constraints on the executive (Exconst), political competition (Polcomp), and executive recruitment (Exrec). The range of the dependent variables is as follows: Polity2 [−10, 10], Exconst [1, 7], Polcomp [1, 10], and Exrec [1, 8].
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
10 The first Polity2 observation used corresponds to 1980, but the first rainfall observation corresponds to 1979 (the starting date of the rainfall data), as our specifications include rainfall levels at t and t − 1.
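For concreteness, a hedged sketch of the column 1 specification (equation (2)) in Python, assuming a DataFrame df with hypothetical columns 'dpolity2' (the t to t + 1 Polity2 change), 'log_rain', 'log_rain_lag', 'country', and 'year':

```python
import statsmodels.formula.api as smf

# Least squares with country fixed effects, country-specific trends, and
# common time effects; Huber (cluster-robust) standard errors clustered
# at the country level, as in Table III.
fit = smf.ols(
    "dpolity2 ~ log_rain + log_rain_lag"
    " + C(country) + C(country):year + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

print(fit.params[["log_rain", "log_rain_lag"]])
print(fit.bse[["log_rain", "log_rain_lag"]])
```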
Ten percent lower rainfall levels result in an increase of 0.046 points in the executive constraints score, and the effect is statistically significant at the 90 percent confidence level. As this score has a [1, 7] range, a 0.046 point increase amounts to a tightening of executive constraints by 0.77 percentage points. The political competition and executive recruitment scores increase by 0.058 and 0.049 points, respectively, and both effects are statistically significant at the 95 percent confidence level. These changes amount to improvements of 0.64 and 0.69 percentage points, respectively, as political competition has a [1, 10] range and executive recruitment has a [1, 8] range.

Table IV contains our estimates of the effect of rainfall on GDP per capita and on the probability of a country-specific recession. Column 1 estimates the effect of contemporaneous rainfall shocks on GDP per capita using equation (1). Our results indicate that 10 percent lower rainfall levels lead to a 0.79 percent drop in income per capita, and that the effect is statistically significant at the 99 percent confidence level. Columns 2 and 3 augment the specification in column 1 with lagged rainfall levels.11 Column 2 shows that rainfall at t − 1 has a statistically insignificant effect on GDP at t. Column 3 includes rainfall at t − 2 as an additional control and finds that its effect is also statistically insignificant. Hence, the main effect of rainfall shocks on income per capita is contemporaneous. Combined with our finding in Table III that rainfall shocks take 1 year to translate into political change, this suggests that political change follows income shocks with a 1-year lag. Acemoglu and Robinson's (2001) theory of political transitions would have predicted a contemporaneous impact, but the discrepancy seems small given the difficulties in dating political changes precisely. In Table IV, column 4, we check whether the contemporaneous effect of rainfall shocks depends on countries' Polity2 score, but find the interaction effect to be statistically insignificant.

Table IV, columns 5–8 consider the effect of rainfall shocks on the country-specific recession indicator. In column 5, we find that 10 percent lower rainfall levels raise the probability of a recession by 3.9 percentage points, and that the effect is statistically significant at the 99 percent confidence level. Columns 6 and 7 show that the effect of lagged rainfall levels is statistically insignificant, and column 8 shows that the contemporaneous effect of rainfall shocks does not vary significantly with countries' Polity2 score.

11 The Supplemental Material Appendix contains a series of robustness checks. In particular, we reestimate the effect of rainfall on income using rainfall levels rather than log levels, examine the relationship in first differences rather than levels, control for temperature, check for nonlinearities, drop the top 1 percent of rainfall observations, account for potential spatial correlation of rainfall, and use a variety of different approaches to calculate standard errors. We also use the Matsuura and Willmott (2007) rainfall data and find a statistically significant effect of rainfall shocks on income for (pre-1990) periods where spatial gauge density is relatively good; see footnote 7. The Matsuura and Willmott rainfall estimates do not yield a significant effect of rainfall on income for the 1980–2004 period we focus on, however. We think that this is most likely due to the unsatisfactory gauge density in the second half of this period.
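The percentage-point conversions used throughout divide each point estimate by the width of the relevant score's range; as an arithmetic check (small discrepancies with the figures in the text reflect rounding of the reported coefficients):

\[
\frac{0.146}{10-(-10)} \approx 0.0073, \qquad \frac{0.046}{7-1} \approx 0.0077, \qquad \frac{0.058}{10-1} \approx 0.0064, \qquad \frac{0.049}{8-1} \approx 0.0070.
\]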
TABLE IV
RAINFALL, PER CAPITA GDP, AND COUNTRY-SPECIFIC RECESSIONSa

                                        Log GDP                     Country-Specific Recession
                              (1)      (2)      (3)      (4)      (5)       (6)       (7)       (8)
Log rainfall, t             0.079*** 0.075*** 0.076*** 0.082*** −0.399*** −0.382*** −0.383*** −0.376**
                           (0.029)  (0.026)  (0.027)  (0.030)  (0.140)   (0.127)   (0.130)   (0.154)
Log rainfall, t − 1                  0.048    0.046                       −0.191    −0.189
                                    (0.032)  (0.029)                     (0.139)   (0.125)
Log rainfall, t − 2                           0.010                                 −0.018
                                             (0.035)                               (0.147)
Log rainfall, t × Polity2, t                          0.001                                   −0.002
                                                     (0.003)                                 (0.021)
Polity2, t                                            0.005                                   −0.048
                                                     (0.013)                                 (0.091)
Country fixed effect          Yes      Yes      Yes      Yes      Yes       Yes       Yes       Yes
Country time trend            Yes      Yes      Yes      Yes      Yes       Yes       Yes       Yes
Common time effect            Yes      Yes      Yes      Yes      Yes       Yes       Yes       Yes
Observations                  955      955      955      955      955       955       955       955

a The method of estimation is least squares; Huber robust standard errors (in parentheses) are clustered at the country level. The dependent variable in columns 1–4 is log real per capita GDP (PWT 6.2). The dependent variable in columns 5–8 is an indicator variable (Country-Specific Recession) that is unity if and only if per capita GDP falls below the country-specific time trend for reasons other than shocks affecting all sub-Saharan countries (see equation (3)).
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
FIGURE 2.—(A) Rainfall and per capita GDP. (B) Rainfall and Polity change. Nonparametric local polynomial estimates are computed using an Epanechnikov kernel. The bandwidth in (A) is 0.1 and in (B) 0.25 as suggested by cross-validation criteria. Dashed lines indicate 95 percent confidence bands.
To check whether our (linear) specifications miss important aspects of the data, we reestimate the effect of rainfall shocks on per capita GDP and on the change in Polity2 using nonparametric local polynomial estimators. Figure 2(A) presents nonparametric local polynomial estimates of the effect of rainfall on GDP.12 We use an Epanechnikov kernel and select the bandwidth as suggested by cross-validation criteria.13 It turns out that the relationship is monotonically increasing except for large positive rainfall shocks, where the relationship is estimated to be hump-shaped.14 The hump is very imprecisely estimated, however, because less than 1 percent of rainfall observations lie to the right of its peak.15 (Reestimating equations (1) and (2) after dropping the top 1 percent of rainfall observations yields results that are slightly stronger statistically; see the Supplemental Material Appendix.) Figure 2(B) uses the same approach to obtain nonparametric local polynomial estimates of the effect of rainfall shocks on the change in the Polity2 score. This relationship is monotonically decreasing over the whole range.

12 Estimation proceeds in two steps. In the first step, we regress log income per capita and log rainfall on country-specific fixed effects plus time trends and common time effects. Then we take the residuals from these two regressions and use the nonparametric local polynomial estimator to examine the relationship between rainfall and per capita income.
13 See Bowman and Azzalini (1997). Intuitively, cross-validation amounts to choosing the bandwidth to minimize the mean-square error.
14 We also present nonparametric local polynomial estimates using half and twice the bandwidth recommended by cross-validation in the Supplemental Material Appendix.
15 The Supplemental Material Appendix tests for nonlinearities by including dummy variables for rainfall levels above or below certain percentiles. These dummy variables turn out to have small and statistically insignificant effects, while the linear effect remains statistically significant.
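A hedged sketch of the two-step procedure described in footnote 12, using statsmodels' kernel regression with hypothetical column names; note that KernelReg uses a Gaussian kernel rather than the Epanechnikov kernel of Figure 2, so this approximates rather than replicates the figure:

```python
import statsmodels.formula.api as smf
from statsmodels.nonparametric.kernel_regression import KernelReg

# Step 1: residualize both variables on country fixed effects plus trends
# and common time effects (footnote 12).
controls = "C(country) + C(country):year + C(year)"
res_y = smf.ols(f"log_gdp ~ {controls}", data=df).fit().resid
res_r = smf.ols(f"log_rain ~ {controls}", data=df).fit().resid

# Step 2: local-linear regression of the income residuals on the rainfall
# residuals; bw='cv_ls' selects the bandwidth by least squares
# cross-validation, in the spirit of footnote 13.
kr = KernelReg(endog=res_y, exog=res_r, var_type="c", reg_type="ll", bw="cv_ls")
fitted, _ = kr.fit()
```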
TABLE V
INCOME SHOCKS AND POLITY CHANGEa

                                    Polity2                     Exconst    Polcomp     Exrec
                         (1)        (2)       (3)      (4)        (5)        (6)        (7)
                        2SLS       2SLS       LS       LS        2SLS       2SLS       2SLS
Log GDP, t − 1        −18.021**  −21.410**  −0.045   −0.836     −5.809*    −7.680**   −6.137*
                      [0.049]    [0.026]   (0.348)  (0.564)    [0.073]    [0.037]    [0.054]
Country fixed effect     Yes        Yes       Yes      Yes        Yes        Yes        Yes
Country time trend       Yes        Yes       Yes      Yes        Yes        Yes        Yes
Common time effect       Yes        Yes       Yes      Yes        Yes        Yes        Yes
Observations             955        902      3191      955        902        902        902

First Stage for Log GDP per capita, t − 1
Log rainfall, t − 1    0.079***   0.077***    —        —        0.077***   0.077***   0.077***
                      (0.029)    (0.029)                       (0.029)    (0.029)    (0.029)
Country fixed effect     Yes        Yes       —        —          Yes        Yes        Yes
Country time trend       Yes        Yes       —        —          Yes        Yes        Yes
Common time effect       Yes        Yes       —        —          Yes        Yes        Yes
Observations             955        902       —        —          902        902        902

a The method of estimation for the first-stage regressions in the bottom panel is least squares; below the least squares estimates, we report Huber robust standard errors (in parentheses) that are clustered at the country level. The method of estimation in the top panel is two-stage least squares in columns 1, 2, and 5–7; below the two-stage least squares estimates, we report p-values [in square brackets] based on the Anderson–Rubin test of statistical significance. A key property of this test is that it is robust to weak instruments; 2SLS standard errors are not robust to weak instruments, and inference based on 2SLS standard errors can be very misleading as a result. See Andrews and Stock (2005) for a review of these issues. We implement a version of the Anderson–Rubin test that is robust to heteroskedasticity and arbitrary within-country correlation of the residuals. For comparison with the two-stage least squares estimates, the top panel also reports least squares estimates for the world sample (in column 3) and the sub-Saharan African sample (in column 4), with standard errors that are robust to heteroskedasticity and arbitrary within-country correlation below the estimates. The dependent variable in the top panel, columns 1–4, is the t to t + 1 change in the revised combined Polity score (Polity2); column 2 excludes observations that correspond to interregnum periods. The dependent variable in the top panel, columns 5–7, is the t to t + 1 change in Polity IV subscores of constraints on the executive (Exconst), political competition (Polcomp), and executive recruitment (Exrec). The range of the dependent variables is as follows: Polity2 [−10, 10], Exconst [1, 7], Polcomp [1, 10], and Exrec [1, 8]. The dependent variable in the bottom panel is the log of real per capita GDP.
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
Table V presents two-stage least squares (2SLS) estimates of the effect of transitory income shocks on the change in the Polity2 score. These estimates assume that the effect of t − 1 rainfall shocks on democratic change documented in Table III operates through income.16

16 In the Supplemental Material Appendix, we examine whether the effect of rainfall shocks on democratic change could instead run through government expenditures, military expenditures, or consumer prices (rather than GDP per capita). Our analysis does not yield a statistically significant effect of rainfall shocks on these variables. In the case of military expenditures, this could be because limited data force us to work with a quite reduced subsample (interestingly, however, we do find a statistically significant effect of rainfall on GDP per capita and democratic change in this subsample).
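A hedged 2SLS sketch of the column 1 specification using the linearmodels package (hypothetical column names; country-specific trends are omitted for brevity, since adding them alongside the year dummies requires dropping one redundant trend column). This reproduces the 2SLS point estimate only; the paper's Anderson–Rubin p-values are not computed by IV2SLS (see the sketch after Table VI):

```python
from linearmodels.iv import IV2SLS

# Lagged log GDP is the endogenous regressor, lagged log rainfall the
# instrument; country and year effects enter as exogenous controls.
formula = "dpolity2 ~ 1 + C(country) + C(year) + [log_gdp_lag ~ log_rain_lag]"
fit = IV2SLS.from_formula(formula, df).fit(
    cov_type="clustered", clusters=df["country"]
)
print(fit.params["log_gdp_lag"])   # 2SLS effect of lagged income shocks
print(fit.first_stage)             # first-stage diagnostics
```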
The top panel of Table V contains estimates of the effect of log income per capita on democratic change, while the bottom panel presents first-stage effects (when applicable). The result in column 1 indicates that a transitory 1 percent negative income shock at t − 1 leads to an improvement in the Polity2 score of 0.18 points.17 This effect is statistically significant at the 95 percent confidence level and amounts to an increase of 0.9 percentage points given the [−10, 10] range of the score.18 In column 2 we drop interregnum periods. The effect continues to be statistically significant at the 95 percent confidence level and is somewhat larger in absolute value than in column 1.19 For comparison, columns 3 and 4 show least squares results for the world sample (the largest possible sample for 1980–2004) and for sub-Saharan Africa, respectively. The least squares estimates have the same sign as the 2SLS estimates, but are much smaller in absolute value and statistically insignificant. For example, in the world sample, a negative income shock of 1 percent leads to an improvement in Polity2 scores of less than 0.01 of a percentage point. For sub-Saharan Africa, the effect is less than 0.05 of a percentage point.20

Our finding that 2SLS estimation yields a stronger negative effect of income shocks on democratic improvements than least squares estimation is most likely explained by the combination of three factors.21 First, the window-of-opportunity theory of political transitions stresses transitory economic shocks; permanent shocks change the balance of power permanently and will therefore allow citizens to demand and obtain policy concessions in the future even in the absence of democratic reforms. When we instrument income shocks using rainfall shocks, we isolate transitory income shocks. Hence, the stronger negative effect obtained using 2SLS in column 1 compared to least squares in column 4 is consistent with the theory. Second, the income estimates in the Penn World Tables contain a substantial amount of noise, especially for sub-Saharan African countries (e.g., Heston (1994), Deaton (2005)). Classical measurement error would affect our least squares estimate in column 4, but not our instrumental variables estimate in column 1, as long as
17 In Table V, the p-values in square brackets below 2SLS estimates are based on the Anderson–Rubin test of statistical significance. A key property of this test is robustness to weak instruments. 2SLS standard errors, on the other hand, are not robust to weak instruments, and inference based on 2SLS standard errors can be very misleading as a result. See Andrews and Stock (2005) for a review of these issues. The power properties of the Anderson–Rubin test are also good (it is a uniformly most powerful unbiased test under certain conditions). We implement a version of the Anderson–Rubin test that is robust to heteroskedasticity and arbitrary within-country correlation of the residuals.
18 In the Supplemental Material Appendix, we show that the effect of year t income shocks is statistically insignificant.
19 In the Supplemental Material Appendix, we show that results are similar when we measure democratic institutions using the Freedom House (2007) political rights indicator.
20 A formal test yields no statistically significant difference between the results for the world sample and for sub-Saharan Africa.
21 In Table V, a Hausman test rejects the equality of the least squares estimate in column 4 and the 2SLS estimate in column 1 at the 90 percent confidence level.
noise in income estimates is uncorrelated with noise in rainfall estimates. Classical measurement error could therefore lead to the least squares estimate in column 4 being attenuated relative to the instrumental variables estimate in column 1. A third reason why the least squares estimate is larger (less negative) than the instrumental variables estimate could be that democratic reforms are partly anticipated, and that this leads to increases in income before reforms are actually in place. This would bias the least squares estimate upward but leave the instrumental variables estimate unaffected.

Table VI uses the country-specific recession indicator to examine democratic change following recessions. The top panel presents our estimates of the effect of recessions on democratic change, while the bottom panel presents first-stage effects (when applicable).

TABLE VI
COUNTRY-SPECIFIC RECESSIONS AND POLITY CHANGEa

                                     Polity2                     Exconst    Polcomp     Exrec
                          (1)       (2)      (3)       (4)         (5)        (6)        (7)
                         2SLS      2SLS      LS        LS         2SLS       2SLS       2SLS
Country-specific
  recession, t − 1      3.584**   4.166**  −0.085    0.199*      1.130*     1.494**    1.194*
                        [0.049]   [0.026]  (0.059)  (0.115)     [0.073]    [0.037]    [0.054]
Country fixed effect      Yes       Yes      Yes       Yes         Yes        Yes        Yes
Country time trend        Yes       Yes      Yes       Yes         Yes        Yes        Yes
Common time effect        Yes       Yes      Yes       Yes         Yes        Yes        Yes
Observations              955       902     3191       955         902        902        902

First Stage for Country-Specific Recession, t − 1
Log rainfall, t − 1    −0.399*** −0.398***   —         —        −0.398***  −0.398***  −0.398***
                        (0.140)   (0.141)                        (0.141)    (0.141)    (0.141)
Country fixed effect      Yes       Yes      —         —           Yes        Yes        Yes
Country time trend        Yes       Yes      —         —           Yes        Yes        Yes
Common time effect        Yes       Yes      —         —           Yes        Yes        Yes
Observations              955       902      —         —           902        902        902

a The method of estimation for the first-stage regressions in the bottom panel is least squares; below the least squares estimates, we report Huber robust standard errors (in parentheses) that are clustered at the country level. The method of estimation in the top panel is two-stage least squares in columns 1, 2, and 5–7; below the two-stage least squares estimates, we report p-values [in square brackets] based on the Anderson–Rubin test of statistical significance. A key property of this test is that it is robust to weak instruments; 2SLS standard errors are not robust to weak instruments, and inference based on 2SLS standard errors can be very misleading as a result. See Andrews and Stock (2005) for a review of these issues. We implement a version of the Anderson–Rubin test that is robust to heteroskedasticity and arbitrary within-country correlation of the residuals. For comparison with the two-stage least squares estimates, the top panel also reports least squares estimates for the world sample (in column 3) and the sub-Saharan African sample (in column 4), with standard errors that are robust to heteroskedasticity and arbitrary within-country correlation below the estimates. The dependent variable in the top panel, columns 1–4, is the t to t + 1 change in the revised combined Polity score (Polity2); column 2 excludes observations that correspond to interregnum periods. The dependent variable in the top panel, columns 5–7, is the t to t + 1 change in Polity IV subscores of constraints on the executive (Exconst), political competition (Polcomp), and executive recruitment (Exrec). The range of the dependent variables is as follows: Polity2 [−10, 10], Exconst [1, 7], Polcomp [1, 10], and Exrec [1, 8]. The dependent variable in the bottom panel is a country-specific recession indicator that is unity if and only if per capita GDP falls below the country-specific time trend for reasons other than shocks affecting all sub-Saharan countries (see equation (3)).
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
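The table notes describe p-values from an Anderson–Rubin test that is robust to weak instruments, heteroskedasticity, and within-country correlation. With one endogenous regressor and one instrument, a minimal sketch of that logic (hypothetical column names; not the authors' implementation):

```python
import numpy as np
import statsmodels.formula.api as smf

def ar_pvalue(df, b0):
    # Under H0: beta = b0, the adjusted outcome y - b0 * x_endog no longer
    # depends on the endogenous regressor, so the instrument should enter
    # the regression below with a zero coefficient. Its cluster-robust
    # p-value is the Anderson-Rubin p-value for H0.
    tmp = df.assign(y0=df["dpolity2"] - b0 * df["log_gdp_lag"])
    fit = smf.ols(
        "y0 ~ log_rain_lag + C(country) + C(country):year + C(year)", data=tmp
    ).fit(cov_type="cluster", cov_kwds={"groups": tmp["country"]})
    return fit.pvalues["log_rain_lag"]

# A 95 percent AR confidence set collects the values of b0 not rejected:
grid = np.linspace(-60.0, 20.0, 161)
ci = [b0 for b0 in grid if ar_pvalue(df, b0) > 0.05]
```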
Columns 1 and 2 measure democratic change using the Polity2 score. The 2SLS estimates in column 1 imply that recessions increase the Polity2 score by 18 percentage points and that the effect is statistically significant at the 95 percent confidence level. The effect is somewhat stronger statistically and quantitatively when we exclude interregnum periods in column 2. Columns 3 and 4 show that least squares estimates of the effect of recessions on Polity2 are much smaller than the 2SLS estimates, whether we consider the world sample in column 3 or sub-Saharan Africa in column 4. Columns 5–7 indicate that recessions also lead to statistically significant improvements in the Polity subscores. Our 2SLS estimates imply that the score for executive constraints improves by 19 percentage points, while the scores for political competition and for the openness and competitiveness of executive recruitment both improve by 17 percentage points.

Table VII augments our baseline estimating equations by including the lagged Polity2 score as an additional control. Columns 1 and 2 use the augmented specifications to reexamine the effect of rainfall shocks on the change in the Polity2 score. Column 1 contains least squares results, while column 2 contains system–generalized method of moments (GMM) estimates (Blundell and Bond (1998)). Both show an effect of t − 1 rainfall shocks that is very similar to our baseline result in column 1 of Table III. Columns 3 and 4 of Table VII contain 2SLS estimates of the effect of income shocks on the change in the Polity2 score, and columns 5–8 add further Polity2 lags on the right-hand side of the estimating equation. The results are again very similar to our baseline estimates.22

Table VIII, column 1 shows the effect of rainfall shocks on the probability of democratization using the Persson and Tabellini (2003, 2006, 2008) and Polity IV project definition of democracy. Our results indicate that negative t − 1 rainfall shocks lead to an increase in the probability of a transition to democracy between t and t + 1, and that the effect is statistically significant at the 95 percent confidence level. The point estimate implies that 10 percent lower rainfall levels increase the probability of a democratic transition by 1.25 percentage points.23 Column 2 repeats the analysis using the democratization step indicator based on the Epstein et al. (2006) and Polity IV trichotomous classification of polities. This yields that 10 percent lower rainfall levels raise the probability of a step toward democracy by 1.4 percentage points and that the effect is statistically significant at the 95 percent confidence level.

Columns 3 and 4 of Table VIII estimate the effect of rainfall shocks on the probability of transitions away from democracy (autocratic transitions) and coups d'état in democracies.

22 In the Supplemental Material Appendix, we show that results are very similar when we put the Polity2 level (instead of the Polity2 change) on the left-hand side of these estimating equations.
23 In an earlier working paper version (see Brückner and Ciccone (2008)), we showed that negative rainfall shocks also have a significantly positive effect on the probability of a transition to democracy when using the Przeworski et al. (2000) democracy indicator.
TABLE VII
INCOME SHOCKS, POLITY CHANGE, AND DEMOCRATIC CONVERGENCEa

                                                   Polity2
                           (1)       (2)        (3)       (4)       (5)       (6)        (7)       (8)
                           LS      SYS-GMM     2SLS      2SLS       LS      SYS-GMM     2SLS      2SLS
Polity2, t               −0.294*** −0.293***  −0.282*** −0.286*** −0.174*** −0.175***  −0.199*** −0.215***
                         (0.023)   (0.023)    (0.043)   (0.036)   (0.034)   (0.034)    (0.040)   (0.043)
Polity2, t − 1                                                    −0.171*** −0.171***  −0.120**  −0.102*
                                                                  (0.025)   (0.025)    (0.052)   (0.055)
Log rainfall, t           0.213     0.227                          0.169     0.142
                         (0.317)   (0.324)                        (0.296)   (0.318)
Log rainfall, t − 1      −1.404**  −1.562**                       −1.403**  −1.581**
                         (0.690)   (0.692)                        (0.661)   (0.690)
Log GDP, t − 1                                −17.360**                                −17.416**
                                              [0.046]                                  [0.036]
Country-specific
  recession, t − 1                                       3.450**                                  3.460**
                                                         [0.046]                                  [0.036]
Country fixed effect       Yes       Yes        Yes       Yes       Yes       Yes        Yes       Yes
Country time trend         Yes       Yes        Yes       Yes       Yes       Yes        Yes       Yes
Common time effect         Yes       Yes        Yes       Yes       Yes       Yes        Yes       Yes
Observations               955       955        955       955       955       955        955       955

a The method of estimation in columns 1 and 5 is least squares, in columns 2 and 6 is system–GMM, and in columns 3, 4, 7, and 8 is two-stage least squares; below the least squares estimates, we report Huber robust standard errors (in parentheses) that are clustered at the country level; below the two-stage least squares estimates, we report p-values [in square brackets] based on the Anderson–Rubin test of statistical significance. A key property of this test is that it is robust to weak instruments; 2SLS standard errors are not robust to weak instruments, and inference based on 2SLS standard errors can be very misleading as a result. See Andrews and Stock (2005) for a review of these issues. We implement a version of the Anderson–Rubin test that is robust to heteroskedasticity and arbitrary within-country correlation of the residuals. The dependent variable is the t to t + 1 change in the revised combined Polity score (Polity2). The instrumental variable in columns 3, 4, 7, and 8 is rainfall. Country-specific recession is an indicator variable that takes the value of unity if and only if per capita GDP falls below the country-specific time trend for reasons other than shocks affecting all sub-Saharan countries (see equation (3)).
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
TABLE VIII
RAINFALL AND POLITY TRANSITIONSa

                      Democratic   Democratization   Autocratic    Coup in
                      Transition        Step         Transition   Democracy
                         (1)             (2)            (3)          (4)
Log rainfall, t         0.027           0.016         −0.021       −0.005
                       (0.034)         (0.027)        (0.048)      (0.089)
Log rainfall, t − 1    −0.125**        −0.140**        0.169       −0.003
                       (0.057)         (0.064)        (0.113)      (0.115)
Country fixed effect     Yes             Yes            Yes          Yes
Country time trend       Yes             Yes            Yes          Yes
Common time effect       Yes             Yes            Yes          Yes
Observations             700             867            255          255

a The method of estimation is least squares; Huber robust standard errors (in parentheses) are clustered at the country level. The dependent variable in column 1 is a democratic transition indicator that is equal to unity in year t if and only if the country is a democracy in t + 1 but a nondemocracy in t (the year t indicator is not defined if the country is a democracy in t). The dependent variable in column 2 is a democratization step indicator that is equal to unity in year t if and only if the country is upgraded to either a partial or full democracy between t and t + 1 (the year t indicator is not defined if the country is a full democracy in t). The dependent variable in column 3 is an autocratic transition indicator that is equal to unity in year t if and only if the country is a nondemocracy in t + 1 but a democracy in t (the year t indicator is not defined if the country is a nondemocracy in t). The dependent variable in column 4 is the incidence of a coup in African countries that were democracies. Coup data are taken from Polity IV, where a coup is defined as a forceful seizure of executive authority and office by a dissident/opposition faction within the country's ruling or political elites that results in a substantial change in the executive leadership and the policies of the prior regime. For further detail on the coding of the dependent variables, see the main text.
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
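The coding of the transition indicators in the note above lends itself to a short pandas sketch (assuming a panel with a hypothetical binary 'democracy' column; this illustrates the definitions, not the authors' code):

```python
import numpy as np

df = df.sort_values(["country", "year"])
dem_next = df.groupby("country")["democracy"].shift(-1)

# Democratic transition: unity in year t iff the country is a democracy in
# t + 1 but a nondemocracy in t; not defined (NaN) if it is a democracy in t.
df["dem_transition"] = np.where(
    df["democracy"] == 1, np.nan, (dem_next == 1).astype(float)
)
# Also undefined where the t + 1 status is unobserved.
df.loc[dem_next.isna(), "dem_transition"] = np.nan
```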
The estimates in column 3 indicate that autocratic transitions are more likely following positive t − 1 rainfall shocks. The effect of rainfall shocks is actually larger in absolute value than for democratic transitions in column 1, but it is very imprecisely estimated and therefore statistically insignificant. For coups d'état in democracies, the effect of rainfall shocks is small and statistically insignificant.24

Table IX, columns 1–3 summarize our findings on the effect of income shocks on transitions to democracy. The least squares effect of income shocks on democratic transitions is very small and statistically insignificant. The effect also turns out to have the wrong sign from the point of view of the democratic window-of-opportunity theory (it implies that negative income shocks decrease the probability of a democratic transition). But the 2SLS estimate in column 2 indicates that negative income shocks lead to an increase in the probability of a democratic transition and that the effect is statistically significant at the 95 percent confidence level. The point estimate implies that a transitory negative income shock of 1 percent increases the probability of democratization by 1.3 percentage points.

24 The sample of autocratic transitions and coups d'état in democracies is much smaller than the sample of democratic transitions. It is also interesting to note that Acemoglu and Robinson's (2001) theory of political transitions is consistent with negative economic shocks leading to democratic transitions but not to democratic reversals.
TABLE IX
INCOME SHOCKS AND TRANSITIONS TO DEMOCRACYa

                                      Democratic Transition        Democratization Step
                                      (1)       (2)       (3)      (4)       (5)       (6)
                                      LS       2SLS      2SLS      LS       2SLS      2SLS
Log GDP, t − 1                       0.056   −1.285**            −0.053   −1.471**
                                    (0.058)  [0.027]             (0.051)  [0.029]
Country-specific recession, t − 1                       0.235**                       0.279**
                                                        [0.027]                       [0.029]
Country fixed effect                  Yes      Yes       Yes       Yes      Yes       Yes
Country time trend                    Yes      Yes       Yes       Yes      Yes       Yes
Common time effect                    Yes      Yes       Yes       Yes      Yes       Yes
Observations                          700      700       700       867      867       867

First Stage for Log GDP per capita / Country-Specific Recession, t − 1
Log rainfall, t − 1                          0.095*** −0.519***           0.094*** −0.494***
                                            (0.037)   (0.164)            (0.032)   (0.151)
Country fixed effect                          Yes       Yes                Yes       Yes
Country time trend                            Yes       Yes                Yes       Yes
Common time effect                            Yes       Yes                Yes       Yes
Observations                                  700       700                867       867

a The method of estimation in columns 1 and 4 is least squares and in columns 2, 3, 5, and 6 is two-stage least squares; below the least squares estimates, we report Huber robust standard errors (in parentheses) that are clustered at the country level; below the two-stage least squares estimates, we report p-values [in square brackets] based on the Anderson–Rubin test of statistical significance. A key property of this test is that it is robust to weak instruments; 2SLS standard errors are not robust to weak instruments, and inference based on 2SLS standard errors can be very misleading as a result. See Andrews and Stock (2005) for a review of these issues. We implement a version of the Anderson–Rubin test that is robust to heteroskedasticity and arbitrary within-country correlation of the residuals. The dependent variable in columns 1–3 is a democratic transition indicator that is equal to unity in year t if and only if the country is a democracy in t + 1 but a nondemocracy in t (the year t indicator is not defined if the country is a democracy in t). The dependent variable in columns 4–6 is a democratization step indicator that is equal to unity in year t if and only if the country is upgraded to either a partial or full democracy between t and t + 1 (the year t indicator is not defined if the country is a full democracy in t). For further detail on the coding of the dependent variables, see the main text. Country-specific recession is an indicator variable that is unity if and only if per capita GDP falls below the country-specific time trend for reasons other than shocks affecting all sub-Saharan countries (see equation (3)).
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
Column 3 shows that following recessions, the probability of a democratic transition increases by 23.5 percentage points and that the effect is statistically significant at the 95 percent confidence level.25

25 Bratton and van de Walle (1997) discussed democratic transitions in Africa over the 1988–1994 period and argued that transitions are largely explained by domestic political forces rather than by domestic economic conditions. Our results indicate that country-specific economic factors did play a role over the 1980–2004 period (there are too few transitions for the 1988–1994 period for statistical analysis).
The results for the democratization step indicator in Table IX, columns 4–6, are similar to the results for democratic transitions. Least squares estimation in column 4 yields a very small and statistically insignificant effect. But 2SLS estimation in columns 5 and 6 yields a statistically significant increase in the probability of a step toward democracy following negative income shocks. For example, according to column 5, a transitory negative income shock of 1 percent increases the probability of a step toward democracy by 1.5 percentage points, and the effect is statistically significant at the 95 percent confidence level. Column 6 indicates that a step toward democracy is 27.9 percentage points more likely following a recession and that this effect is also statistically significant at the 95 percent confidence level.

Our interpretation of the effect of rainfall shocks on democratic change is that a negative rainfall shock opens a window of opportunity for democratic improvement because it translates into a transitory negative GDP shock and hence a lower opportunity cost of contesting power. If this interpretation is correct, the effect of rainfall shocks on democratic change should be absent in countries where rainfall shocks do not affect GDP. Moreover, if rainfall shocks affect GDP through agricultural output, the effect of rainfall shocks on GDP should be weak in countries with small agricultural sectors.26 It is, therefore, interesting to examine whether the effects of rainfall shocks on democratic change and on per capita GDP are indeed weak in countries with relatively small agricultural sectors. To do so, we use data from the World Development Indicators (WDI) (2009) to calculate the average agricultural GDP share over the 1980–2004 period for each country in our sample, and we analyze the effect of rainfall shocks on GDP and on democratic change in countries with agricultural GDP shares below the median.27 The results in the top panel of Table X show that in these countries the effect of rainfall shocks on GDP per capita is statistically insignificant (see column 1), and so is the effect of rainfall shocks on democratic change (see columns 2–5). This result is consistent with rainfall shocks affecting democratic institutions through income. The finding also suggests that rainfall does not have (strong) direct effects on democratic change.28

26 The Supplemental Material Appendix shows that rainfall has a highly statistically significant, positive effect on agricultural output in our sample (see Dell, Jones, and Olken (2008) for evidence on the positive effect of rainfall on agricultural value added in a wider sample of countries).
27 The median agricultural GDP share in our sample is 34 percent, and the average agricultural share in below-median countries is 18 percent.
28 The bottom panel of Table X shows results for countries with agricultural sectors above the median (the average agricultural share in these countries is 44 percent). Rainfall has a significantly positive effect on GDP and a significantly negative effect on democratic improvement in these countries (and the point estimates are larger in absolute value than for countries with agricultural shares below the median).
TABLE X
RAIN, AGRICULTURE, GDP, AND DEMOCRATIC CHANGEa

                       Log GDP        Polity2          Democratic   Democratic
                                                       Transition      Step
                         (1)       (2)       (3)          (4)          (5)

Panel A: Below the Sample Median
Log rainfall, t         0.031     0.240     0.181       −0.010        0.021
                       (0.032)   (0.380)   (0.386)      (0.039)      (0.020)
Log rainfall, t − 1     0.003    −0.885    −1.010       −0.083       −0.042
                       (0.036)   (0.734)   (0.730)      (0.084)      (0.067)
Country fixed effect     Yes       Yes       Yes          Yes          Yes
Country time trend       Yes       Yes       Yes          Yes          Yes
Common time effect       Yes       Yes       Yes          Yes          Yes
Observations             468       468       450          336          396

Panel B: Above the Sample Median
Log rainfall, t         0.130***  0.519     0.011        0.070        0.021
                       (0.045)   (0.685)   (0.840)      (0.070)      (0.049)
Log rainfall, t − 1     0.088    −2.773*   −3.490***    −0.207**     −0.297***
                       (0.056)   (1.430)   (1.329)      (0.090)      (0.105)
Country fixed effect     Yes       Yes       Yes          Yes          Yes
Country time trend       Yes       Yes       Yes          Yes          Yes
Common time effect       Yes       Yes       Yes          Yes          Yes
Observations             487       487       452          364          471

a The method of estimation is least squares; Huber robust standard errors (in parentheses) are clustered at the country level. Panel A computes regressions for countries whose 1980–2004 agricultural share in GDP was below the sample median; Panel B computes regressions for those whose 1980–2004 agricultural share is above the sample median. The dependent variable in column 1 is the log of real per capita GDP; in column 2, the dependent variable is the t to t + 1 change in the revised combined Polity score (Polity2); column 3 excludes observations that correspond to interregnum periods; in column 4, the dependent variable is a democratic transition indicator that is equal to unity in year t if and only if the country is a democracy in t + 1 but a nondemocracy in t (the year t indicator is not defined if the country is a democracy in t); in column 5, the dependent variable is a democratization step indicator that is equal to unity in year t if and only if the country is upgraded to either a partial or full democracy between t and t + 1 (the year t indicator is not defined if the country is a full democracy in t). For further detail on the coding of the dependent variables, see the main text. The average share of agriculture in GDP is from WDI (2009).
*Significantly different from zero at 90 percent confidence; **95 percent confidence; ***99 percent confidence.
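A hedged sketch of the Table X sample split, assuming a hypothetical 'ag_share' column holding the WDI (2009) agricultural GDP share:

```python
# Average 1980-2004 agricultural GDP share by country, split at the median.
ag_mean = df.groupby("country")["ag_share"].mean()
below = ag_mean.index[ag_mean < ag_mean.median()]

panel_a = df[df["country"].isin(below)]   # below-median agricultural share
panel_b = df[~df["country"].isin(below)]  # above-median agricultural share
```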
5. CONCLUSIONS

It has long been argued that democratic improvement is often triggered by economic recessions. As emphasized by the literature on political sociology, this could be for several reasons. For example, Lipset (1959) and Huntington (1991) argued that economic recessions lead to autocratic regimes losing legitimacy, partly because recessions are taken as a sign of government incompetence. In Acemoglu and Robinson's (2001) economic approach to political transitions, on the other hand, economic shocks may give rise to political change even if shocks are (known to be) exogenous and transitory. This is because such shocks imply a temporary fall in the opportunity cost of contesting power. We examine the effect of exogenous, transitory income shocks on political
transitions by exploiting within-country rainfall shocks in sub-Saharan Africa, where such shocks have a significant but transitory impact on GDP. Our analysis yields that negative rainfall shocks lead to significant democratic improvement and, in particular, to a tightening of executive constraints, greater political competition, and more open and competitive executive recruitment. Our instrumental variables results indicate that improvements in democratic institutions triggered by transitory negative income shocks can be substantial. For example, rainfall-driven recessions are followed by an improvement in the score for executive constraints of 19 percentage points and an improvement in the scores for political competition and for the openness and competitiveness of executive recruitment of 17 percentage points.

REFERENCES

ACEMOGLU, D., AND J. ROBINSON (2001): "A Theory of Political Transitions," American Economic Review, 91, 938–963. [923,933,941,944]
——— (2006): Economic Origins of Dictatorship and Democracy. New York: Cambridge University Press. [923,926]
ACEMOGLU, D., S. JOHNSON, J. ROBINSON, AND P. YARED (2008): "Income and Democracy," American Economic Review, 98, 808–842. [926]
——— (2009): "Reevaluating the Modernization Hypothesis," Journal of Monetary Economics, 56, 1043–1058. [926]
ADLER, R., G. HUFFMAN, A. CHANG, R. FERRARO, P. XIE, J. JANOWIAK, B. RUDOLF, U. SCHNEIDER, S. CURTIS, D. BOLVIN, A. GRUBER, J. SUSSKIND, P. ARKIN, AND E. NELKIN (2003): "The Version 2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979–Present)," Journal of Hydrometeorology, 4, 1147–1167. [929]
ANDREWS, D., AND J. STOCK (2005): "Inference With Weak Instruments," Technical Paper 0313, NBER. [936-938,940,942]
BARRIOS, S., L. BERTINELLI, AND E. STROBL (2010): "Trends in Rainfall and Economic Growth in Africa: A Neglected Cause of the African Growth Tragedy," Review of Economics and Statistics, 92, 350–366. [925]
BARRO, R. (1999): "Determinants of Democracy," Journal of Political Economy, 107, S158–S183. [926]
BENSON, C., AND E. CLAY (1998): "The Impact of Drought on Sub-Saharan Economies," Technical Paper 401, World Bank, Washington. [925]
BERGER, H., AND M. SPOERER (2001): "Economic Crises and the European Revolutions of 1848," Journal of Economic History, 61, 293–326. [926]
BLUNDELL, R., AND S. BOND (1998): "Initial Conditions and Moment Restrictions in Dynamic Panel Data Models," Journal of Econometrics, 87, 115–143. [939]
BOWMAN, A., AND A. AZZALINI (1997): Applied Smoothing Techniques for Data Analysis. Oxford: Clarendon Press. [935]
BRATTON, M., AND N. VAN DE WALLE (1997): Democratic Experiments in Africa: Regime Transitions in Comparative Perspective. New York: Cambridge University Press. [942]
BRÜCKNER, M., AND A. CICCONE (2008): "Rain and the Democratic Window of Opportunity," Discussion Paper 6691, CEPR. [939]
——— (2011): "Supplement to 'Rain and the Democratic Window of Opportunity'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8183_data and programs.zip; http://www.econometricsociety.org/ecta/Supmat/8183_tables.pdf. [926]
BURKE, P., AND A. LEIGH (2010): "Do Output Contractions Trigger Democratic Change?" American Economic Journal: Macroeconomics, 2, 124–157. [926]
CEKAN, J. (1993): "Famine Coping Strategies in Central Mali," GeoJournal, 30, 147–151. [926]
COLLIER, P., AND A. HOEFFLER (1998): "On Economic Causes of Civil War," Oxford Economic Papers, 50, 563–573. [926]
DEATON, A. (2005): "Measuring Poverty in a Growing World (or Measuring Growth in a Poor World)," Review of Economics and Statistics, 87, 1–19. [937]
DELL, M., B. JONES, AND B. OLKEN (2008): "Climate Shocks and Economic Growth: Evidence From the Last Half Century," Working Paper 14132, NBER. [943]
EPSTEIN, D., R. BATES, J. GOLDSTONE, I. KRISTENSEN, AND S. O'HALLORAN (2006): "Democratic Transitions," American Journal of Political Science, 50, 551–569. [925,926,928,939]
FAFCHAMPS, M., C. UDRY, AND K. CZUKAS (1998): "Drought and Saving in West Africa: Are Livestock a Buffer Stock?" Journal of Development Economics, 55, 273–305. [926]
FREEDOM HOUSE (2007): Freedom in the World Country Ratings, 1972–2007. Washington, DC: Freedom House. Available at http://www.freedomhouse.org. [937]
GEDDES, B. (1999): "What Do We Know About Democratization After Twenty Years?" Annual Review of Political Science, 2, 115–144. [926]
GLAESER, E., R. LA PORTA, F. LOPEZ-DE-SILANES, AND A. SHLEIFER (2004): "Do Institutions Cause Growth?" Journal of Economic Growth, 9, 271–303. [924]
GREENE, W. (2003): Econometric Analysis. New York: Prentice Hall. [931]
——— (2004): "The Behavior of the Maximum Likelihood Estimator of Limited Dependent Variable Models in the Presence of Fixed Effects," Econometrics Journal, 7, 98–119. [931]
HAGGARD, S., AND R. KAUFMAN (1995): The Political Economy of Democratic Transitions. Princeton: Princeton University Press. [923,926]
HESTON, A. (1994): "A Brief Review of Some Problems in Using National Accounts Data in Level of Output Comparisons and Growth Studies," Journal of Development Economics, 44, 29–52. [937]
HESTON, A., R. SUMMERS, AND B. ATEN (2006): "Penn World Table Version 6.2," Center for International Comparisons of Production, Income and Prices, University of Pennsylvania. Available at http://pwt.econ.upenn.edu. [930]
HUNTINGTON, S. (1991): The Third Wave: Democratization in the Late Twentieth Century. Norman: University of Oklahoma Press. [923,927,944]
LIPSET, S. (1959): "Some Social Prerequisites for Democracy: Economic Development and Political Legitimacy," American Political Science Review, 53, 69–105. [923,927,944]
MARSHALL, M., AND K. JAGGERS (2005): "Polity IV Project: Dataset Users' Manual," Center for Global Policy, George Mason University. Available at www.cidcm.umd.edu/polity. [Polity IV Data Computer File, Version p4v2004, Center for International Development and Conflict Management, University of Maryland, College Park, MD.] [923,927,929]
MATSUURA, K., AND C. WILLMOTT (2007): "Terrestrial Air Temperature and Precipitation: 1900–2006 Gridded Monthly Time Series, Version 1.01," University of Delaware. Available at http://climate.geog.udel.edu/~climate/. [929,933]
MIGUEL, E., S. SATYANATH, AND E. SERGENTI (2004): "Economic Shocks and Civil Conflict: An Instrumental Variables Approach," Journal of Political Economy, 112, 725–753. [925,926]
NICHOLSON, S., B. SOME, J. MCCOLLUM, E. NELKIN, D. KLOTTER, Y. BERTE, B. DIALLO, I. GAYE, G. KPABEBA, O. NDIAYE, J. NOUKPOZOUNKOU, M. TANU, A. THIAM, A. TOURE, AND A. TRAORE (2003): "Validation of TRMM and Other Rainfall Estimates With a High-Density Gauge Dataset for West Africa. Part I: Validation of GPCC Rainfall Product and Pre-TRMM Satellite and Blended Products," Journal of Applied Meteorology and Climatology, 42, 1337–1354. [929]
PAXSON, C. (1992): "Using Weather Variability to Estimate the Response of Savings to Transitory Income in Thailand," American Economic Review, 82, 15–33. [926]
PERSSON, T., AND G. TABELLINI (2003): The Economic Effects of Constitutions. Cambridge: MIT Press. [925,928,939]
——— (2006): "Democracy and Development. The Devil in Detail," American Economic Review, 96, 319–324. [928,939]
——— (2008): "The Growth Effect of Democracy: Is It Heterogeneous and How Can It Be Estimated?" in Institutions and Economic Performance, ed. by E. Helpman. Cambridge, MA: Harvard University Press. [928,939]
PRZEWORSKI, A., AND F. LIMONGI (1997): "Modernization: Theories and Facts," World Politics, 49, 155–183. [926]
PRZEWORSKI, A., M. ALVAREZ, J. CHEIBUB, AND F. LIMONGI (2000): Democracy and Development: Political Institutions and the Well-Being of the World, 1950–1990. Cambridge: Cambridge University Press. [926,939]
RUDOLF, B., H. HAUSCHILD, W. RÜTH, AND U. SCHNEIDER (1994): "Terrestrial Precipitation Analysis: Operational Method and Required Density of Point Measurements," in NATO ASI I/26, Global Precipitations and Climate Change, ed. by M. Desbois and F. Desalmand. Berlin: Springer-Verlag, 173–186. [930]
WOOLDRIDGE, J. (2002): Econometric Analysis of Cross Section and Panel Data. Cambridge: MIT Press. [931]
WORLD DEVELOPMENT INDICATORS (2009): Online Database, World Bank. [943,944]
WORLD METEOROLOGICAL ORGANIZATION (1985): "Review of Requirements for Area-Averaged Precipitation Data, Surface Based and Space Based Estimation Techniques, Space and Time Sampling, Accuracy and Error, Data Exchange," WCP 100, WMO/TD 115, World Meteorological Organization, Geneva. [930]
Dept. of Economics and Business, Universitat Pompeu Fabra, Ramon Trias Fargas 25, Barcelona 08005, Spain; [email protected]
and
Dept. of Economics and Business, Universitat Pompeu Fabra-ICREA, Ramon Trias Fargas 25, Barcelona 08005, Spain and Barcelona GSE; [email protected].

Manuscript received October, 2008; final revision received May, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 949–955
NOTES AND COMMENTS

PARTIAL IDENTIFICATION IN TRIANGULAR SYSTEMS OF EQUATIONS WITH BINARY DEPENDENT VARIABLES

BY AZEEM M. SHAIKH AND EDWARD J. VYTLACIL1

This paper studies the special case of the triangular system of equations in Vytlacil and Yildiz (2007), where both dependent variables are binary, but without imposing the restrictive support condition required by Vytlacil and Yildiz (2007) for identification of the average structural function (ASF) and the average treatment effect (ATE). Under weak regularity conditions, we derive upper and lower bounds on the ASF and the ATE. We show further that the bounds on the ASF and the ATE are sharp under some further regularity conditions and an additional restriction on the support of the covariates and the instrument.

KEYWORDS: Partial identification, simultaneous equation model, binary dependent variable, endogeneity, threshold crossing model, weak separability, average structural function, average treatment effect.
1. INTRODUCTION

THIS PAPER STUDIES the special case of the triangular system of equations in Vytlacil and Yildiz (2007), where both dependent variables are binary. Under the weak separability assumptions imposed by Vytlacil and Yildiz (2007), such a model may, without loss of generality, be written as2

(1)   $Y = I\{\nu_1(D, X) \ge \varepsilon_1\}, \qquad D = I\{\nu_2(Z) \ge \varepsilon_2\}.$
Here, Y denotes the observed binary outcome of interest, D denotes the observed binary endogenous regressor, X and Z are observed random vectors, and ε₁ and ε₂ are unobserved random variables. We additionally assume some mild regularity on the distribution of (ε₁, ε₂) and that X and Z are exogenous in the sense that (X, Z) ⊥⊥ (ε₁, ε₂). Under these assumptions, we derive upper and lower bounds on the average structural function (ASF) and the average treatment effect (ATE), which may be expressed, respectively, as

\[
\begin{aligned}
G_1(d, x) &= \Pr\{Y_d = 1 | X = x\}, \\
\Delta G_1(x) &= \Pr\{Y_1 = 1 | X = x\} - \Pr\{Y_0 = 1 | X = x\},
\end{aligned}
\]

where $Y_d = I\{\nu_1(d, X) \ge \varepsilon_1\}$ and (d, x) denotes a potential realization of (D, X).

1 An earlier version of this paper titled "Threshold Crossing Models and Bounds on Treatment Effects: A Nonparametric Analysis" appeared in May 2005 as NBER Technical Working Paper 307. We would like to thank Hide Ichimura, Jim Heckman, Whitney Newey, and Jim Powell for very helpful comments on this paper. This research was conducted in part while Edward Vytlacil was in residence at Hitotsubashi University. This research was supported by NSF SES05-51089 and DMS-08-20310.
2 This can be shown by appropriately adapting arguments in Vytlacil (2002).
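To make the setup concrete, the following toy simulation of system (1) uses hypothetical threshold functions and correlated errors; all functional forms and parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.integers(0, 2, n)   # binary covariate
z = rng.integers(0, 3, n)   # discrete instrument

# Correlated errors make D endogenous in the outcome equation.
e1, e2 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], n).T

nu1 = lambda d, x: 0.5 * d + 0.3 * x - 0.2  # hypothetical nu_1
nu2 = lambda z: 0.4 * z - 0.5               # hypothetical nu_2

d = (nu2(z) >= e2).astype(int)     # selection equation
y = (nu1(d, x) >= e1).astype(int)  # outcome equation
```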
© 2011 The Econometric Society
DOI: 10.3982/ECTA9082
Vytlacil and Yildiz (2007) established identification of the ASF and the ATE when the support of the distribution of X conditional on Pr{D = 1|Z} is sufficiently rich. This support condition would be expected to fail near the boundaries of the support of X. In particular, it would be expected to fail when X is a discrete random variable. In this paper, we do not impose any such support restriction. Under further assumptions, we show that the bounds we derive on the ASF and the ATE are sharp in the sense that, for any value lying between the upper and lower bounds, there exists a distribution of unobservable variables satisfying all of the assumptions of our analysis that is consistent with both the distribution of the observed data and the proposed value of the ASF or the ATE. In subsequent work, Chiburis (2010) showed that our bounds may not be sharp when these additional assumptions are not satisfied.

2. IDENTIFICATION ANALYSIS

Formally, we will make use of the following assumptions in our analysis:

ASSUMPTION 2.1: (X, Z) ⊥⊥ (ε₁, ε₂).

ASSUMPTION 2.2: The distribution of (ε₁, ε₂) has strictly positive density with respect to (w.r.t.) Lebesgue measure on R².

ASSUMPTION 2.3: The support of the distribution of (X, Z), supp(X, Z), is compact.

ASSUMPTION 2.4: The functions ν₁(·) and ν₂(·) are continuous.

ASSUMPTION 2.5: The distribution of ν₂(Z)|X is nondegenerate.

Our analysis below is similar to Chesher (2005), but his analysis requires a rank condition that can hold only in trivial cases when D is binary. Jun, Pinkse, and Xu (2009) relaxed this rank condition so that it may hold nontrivially when D is binary, but they impose an additional assumption on the dependence between ε₁ and ε₂. Note that it follows from Assumptions 2.1 and 2.2 that we may, without loss of generality, normalize ε₂ ∼ U(0, 1) and ν₂(Z) = P(Z) = Pr{D = 1|Z}. We may sometimes write P in place of P(Z). After such a normalization, Assumption 2.2 becomes the requirement that the distribution of (ε₁, ε₂) has a strictly positive density w.r.t. Lebesgue measure on R × [0, 1]. Furthermore, note that Assumptions 2.1–2.4 imply that P is bounded away from 0 and 1. We will henceforth work with the normalized model.

Consider first identification of G₁(1, x). By equation (1) and Assumption 2.1, we have that Pr{Y₁ = 1 | X} = Pr{Y₁ = 1 | X, P(Z)} and Pr{D = 1 | X, P(Z)} = P(Z).
Since the events {D = 1, Y = 1} and {D = 1, Y₁ = 1} are the same,

\[
\begin{aligned}
\Pr\{Y_1 = 1 | X, P(Z)\} &= \Pr\{D = 1, Y_1 = 1 | X, P(Z)\} + \Pr\{D = 0, Y_1 = 1 | X, P(Z)\} \\
&= \Pr\{D = 1, Y = 1 | X, P(Z)\} + (1 - P(Z)) \Pr\{Y_1 = 1 | X, P(Z), D = 0\}.
\end{aligned}
\]

The terms P(Z) and Pr{D = 1, Y = 1 | X, P(Z)} are identified, but the term Pr{Y₁ = 1 | X, P(Z), D = 0} is not identified. Since Y is binary, this unidentified term is bounded from above and below by 1 and 0, so

\[
\Pr\{D = 1, Y = 1 | X, P(Z)\} \le \Pr\{Y_1 = 1 | X\} \le \Pr\{D = 1, Y = 1 | X, P(Z)\} + (1 - P(Z)).
\]

Since Pr{Y₁ = 1 | X} does not depend on P(Z), we can take the supremum of the lower bounds and the infimum of the upper bounds over values of P(Z). Parallel reasoning provides bounds on Pr{Y₀ = 1 | X = x}.

The next lemma uses equation (1) together with the other assumptions of our analysis to determine the sign of ν₁(1, x′) − ν₁(0, x) from a modified instrumental variables-like term that is identified. Depending on the sign of ν₁(1, x′) − ν₁(0, x), we will then be able to bound Pr{Y₁ = 1 | D = 0, X = x, P = p} and Pr{Y₀ = 1 | D = 1, X = x, P = p} from above or below by identified terms other than 1 or 0.

LEMMA 2.1: Suppose Y and D are determined by (1) and that Assumptions 2.1 and 2.2 hold. Let

\[
\begin{aligned}
h(x, x', p, p') ={}& \bigl(\Pr\{D = 1, Y = 1 | X = x', P = p\} - \Pr\{D = 1, Y = 1 | X = x', P = p'\}\bigr) \\
&- \bigl(\Pr\{D = 0, Y = 1 | X = x, P = p'\} - \Pr\{D = 0, Y = 1 | X = x, P = p\}\bigr).
\end{aligned}
\]

Then, whenever all conditional probabilities are well defined, we have for p > p′ that h(x, x′, p, p′) and ν₁(1, x′) − ν₁(0, x) share the same sign. In particular, the sign of h(x, x′, p, p′) does not depend on p or p′ provided p > p′.

For the proof, see the Supplemental Material (Shaikh and Vytlacil (2011)). Before proceeding with the statement of the main theorem, we illustrate the use of Lemma 2.1 in characterizing the possible values for Pr{Y₁ = 1 | D = 0, X = x, P = p} and Pr{Y₀ = 1 | D = 1, X = x, P = p}.
0, X = x, P = p} and Pr{Y0 = 1|D = 1, X = x, P = p}. Denote by P′ a random variable distributed independently of P with the same distribution as P. Define

(2)  H(x, x′) = E[h(x, x′, P, P′)|P > P′],
where h(x, x′, p, p′) = 0 whenever it is not well defined. Suppose there exists p > p′ for which h(x, x′, p, p′) is well defined, that is, p > p′ with both p and p′ in supp(P|X = x) ∩ supp(P|X = x′). Recall that the sign of h(x, x′, p, p′) does not depend on p or p′ provided p > p′. If H(x, x′) ≥ 0, then it follows from Lemma 2.1 that ν1(1, x′) ≥ ν1(0, x). Therefore,

Pr{Y0 = 1|D = 1, X = x, P = p} = Pr{ε1 ≤ ν1(0, X)|D = 1, X = x, P = p}
≤ Pr{ε1 ≤ ν1(1, X)|D = 1, X = x′, P = p}
= Pr{Y = 1|D = 1, X = x′, P = p},

where the first and third equalities follow from equation (1), and the inequality follows from the fact that ν1(1, x′) ≥ ν1(0, x) and Assumption 2.2. If, on the other hand, H(x, x′) ≤ 0, then we can argue along similar lines to bound Pr{Y0 = 1|D = 1, X = x, P = p} from below by Pr{Y = 1|D = 1, X = x′, P = p}. We can thus bound the unidentified terms Pr{Y0 = 1|D = 1, X = x, P = p} and Pr{Y1 = 1|D = 0, X = x, P = p} by lower and upper bounds that differ from 0 and 1.

We now state our main theorem. In the statement of the theorem, it is understood that all supremums and infimums are only taken over regions where all conditional probabilities are well defined, the supremum over the empty set is 0, and the infimum over the empty set is 1.

THEOREM 2.1: Suppose Y and D are determined by (1). Let X0+(x) = {x′ : H(x, x′) ≥ 0}, X0−(x) = {x′ : H(x, x′) ≤ 0}, X1+(x) = {x′ : H(x′, x) ≥ 0}, and X1−(x) = {x′ : H(x′, x) ≤ 0}, where H(x, x′) is defined in (2) if h(x, x′, p, p′) is well defined for some p > p′, and with each set understood to be empty if h(x, x′, p, p′) is not well defined for any p > p′. Then we have the following statements:
(i) If Assumptions 2.1 and 2.2 hold, then G1(d, x) ∈ [Ld(x), Ud(x)] for d ∈ {0, 1} and G1(x) ∈ [L(x), U(x)], where L(x) = L1(x) − U0(x), U(x) = U1(x) − L0(x), and

L0(x) = sup_p [Pr{D = 0, Y = 1|X = x, P = p} + sup_{x′ ∈ X0−(x)} Pr{D = 1, Y = 1|X = x′, P = p}],
L1(x) = sup_p [Pr{D = 1, Y = 1|X = x, P = p} + sup_{x′ ∈ X1+(x)} Pr{D = 0, Y = 1|X = x′, P = p}],

U0(x) = inf_p [Pr{D = 0, Y = 1|X = x, P = p} + p · inf_{x′ ∈ X0+(x)} Pr{Y = 1|D = 1, X = x′, P = p}],

U1(x) = inf_p [Pr{D = 1, Y = 1|X = x, P = p} + (1 − p) · inf_{x′ ∈ X1−(x)} Pr{Y = 1|D = 0, X = x′, P = p}].

(ii) If Assumptions 2.1 and 2.2 hold and supp(P, X) = supp(P) × supp(X), then the above expressions for Ld(x) and Ud(x) for d ∈ {0, 1} simplify as

L0(x) = Pr{D = 0, Y = 1|X = x, P = p̲} + sup_{x′ ∈ X0−(x)} Pr{D = 1, Y = 1|X = x′, P = p̲},

L1(x) = Pr{D = 1, Y = 1|X = x, P = p̄} + sup_{x′ ∈ X1+(x)} Pr{D = 0, Y = 1|X = x′, P = p̄},

U0(x) = Pr{D = 0, Y = 1|X = x, P = p̲} + p̲ · inf_{x′ ∈ X0+(x)} Pr{Y = 1|D = 1, X = x′, P = p̲},

U1(x) = Pr{D = 1, Y = 1|X = x, P = p̄} + (1 − p̄) · inf_{x′ ∈ X1−(x)} Pr{Y = 1|D = 0, X = x′, P = p̄},

where p̲ = inf{p : p ∈ supp(P)} and p̄ = sup{p : p ∈ supp(P)}.

(iii) If Assumptions 2.1–2.4 hold and supp(P, X) = supp(P) × supp(X), then the above bounds are sharp.

The proof is given in the Supplemental Material.
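To fix ideas, the following sketch computes plug-in analogues of the part (ii) bounds when X and Z (and hence P) are discrete. It is our illustration, not anything supplied by the authors: the names (pjoint, pcond, H_hat, bounds_part_ii) are hypothetical, sample cell frequencies stand in for the population probabilities, and the evaluation of the 0-side bounds at p̲ and the 1-side bounds at p̄ reflects our reading of the theorem. Because the sign of h(x, x′, p, p′) does not depend on (p, p′), an unweighted average over well-defined pairs stands in for E[h(x, x′, P, P′)|P > P′]; only its sign is used.

```python
# Hypothetical plug-in sketch of the Theorem 2.1(ii) bounds for discrete data.
# Assumes y, d are 0/1 numpy arrays, x is discrete, and p holds the first-stage
# propensity P(Z) for each observation; rectangular support supp(P, X) is assumed.
import numpy as np

def pjoint(y, d, x, p, dv, xv, pv):
    """Plug-in Pr{D = dv, Y = 1 | X = xv, P = pv}; None if the cell is empty."""
    cell = (x == xv) & np.isclose(p, pv)
    return float(np.mean((d[cell] == dv) & (y[cell] == 1))) if cell.any() else None

def pcond(y, d, x, p, dv, xv, pv):
    """Plug-in Pr{Y = 1 | D = dv, X = xv, P = pv}; None if the cell is empty."""
    cell = (x == xv) & np.isclose(p, pv) & (d == dv)
    return float(np.mean(y[cell])) if cell.any() else None

def H_hat(y, d, x, p, x0, x1):
    """Unweighted average of h(x0, x1, p, p') over well-defined pairs with p > p'.
    By Lemma 2.1, its sign estimates the sign of nu1(1, x1) - nu1(0, x0)."""
    supp, hs = np.unique(p), []
    for ph in supp:
        for pl in supp[supp < ph]:
            t = (pjoint(y, d, x, p, 1, x1, ph), pjoint(y, d, x, p, 1, x1, pl),
                 pjoint(y, d, x, p, 0, x0, pl), pjoint(y, d, x, p, 0, x0, ph))
            if None not in t:
                hs.append((t[0] - t[1]) - (t[2] - t[3]))
    return float(np.mean(hs)) if hs else None  # None: no pair p > p' well defined

def sup_or0(vals):  # the theorem stipulates sup over the empty set = 0
    vals = [v for v in vals if v is not None]
    return max(vals) if vals else 0.0

def inf_or1(vals):  # and inf over the empty set = 1
    vals = [v for v in vals if v is not None]
    return min(vals) if vals else 1.0

def bounds_part_ii(y, d, x, p, xv):
    """Bounds on G1(0, xv), G1(1, xv), and the ATE G1(xv)."""
    xs, p_lo, p_hi = np.unique(x), p.min(), p.max()
    Hr = {xp: H_hat(y, d, x, p, xv, xp) for xp in xs}  # H(x, x'): builds the X0 sets
    Hl = {xp: H_hat(y, d, x, p, xp, xv) for xp in xs}  # H(x', x): builds the X1 sets
    X0m = [xp for xp in xs if Hr[xp] is not None and Hr[xp] <= 0]
    X0p = [xp for xp in xs if Hr[xp] is not None and Hr[xp] >= 0]
    X1p = [xp for xp in xs if Hl[xp] is not None and Hl[xp] >= 0]
    X1m = [xp for xp in xs if Hl[xp] is not None and Hl[xp] <= 0]
    base0, base1 = pjoint(y, d, x, p, 0, xv, p_lo), pjoint(y, d, x, p, 1, xv, p_hi)
    if base0 is None or base1 is None:
        raise ValueError("cells at the support endpoints of P are empty")
    L0 = base0 + sup_or0([pjoint(y, d, x, p, 1, xp, p_lo) for xp in X0m])
    U0 = base0 + p_lo * inf_or1([pcond(y, d, x, p, 1, xp, p_lo) for xp in X0p])
    L1 = base1 + sup_or0([pjoint(y, d, x, p, 0, xp, p_hi) for xp in X1p])
    U1 = base1 + (1 - p_hi) * inf_or1([pcond(y, d, x, p, 0, xp, p_hi) for xp in X1m])
    return {"G1(0,x)": (L0, U0), "G1(1,x)": (L1, U1), "ATE": (L1 - U0, U1 - L0)}
```

Note that when the instrument does not move P, every H_hat call returns None, the Xd±(x) sets are empty, and the expressions collapse to Manski-type worst-case bounds, consistent with Remark 2.1 below.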
As a corollary, we have immediately that the sign of G1(x) is identified whenever h(x, x′, p, p′) is well defined for some p > p′. This will be the case whenever Assumption 2.5 holds.

COROLLARY 2.1: Suppose that Y and D satisfy (1) and that Assumptions 2.1, 2.2, and 2.5 hold. Then the sign of G1(x) is identified.

REMARK 2.1: The bounds of Theorem 2.1 reduce to those in Manski (1989) if Assumption 2.5 does not hold. The bounds are narrower the more variation there is in X conditional on P(Z). In the extreme case where X is degenerate conditional on P(Z), the bounds reduce to the same form as the Manski and Pepper (2000) bounds under monotone treatment response, even though the assumptions are different. See the analysis in Bhattacharya, Shaikh, and Vytlacil (2008) for details.

REMARK 2.2: It is interesting to ask when the upper and lower bounds will equal one another for the ASF or the ATE, that is, when the bounds imply that the ASF or the ATE is identified. Suppose that supp(P, X) = supp(P) × supp(X) and that the sets Xd+(x) and Xd−(x) for d ∈ {0, 1} are nonempty. Consider G1(0, x). The analysis for G1(1, x) and G1(x) is similar. The width of the bounds on G1(0, x) is equal to

(3)  inf_{x′ ∈ X0+(x)} Pr{D = 1, Y = 1|X = x′, P = p̲} − sup_{x′ ∈ X0−(x)} Pr{D = 1, Y = 1|X = x′, P = p̲}.

Suppose there exists x∗ such that H(x, x∗) = 0. It follows that x∗ ∈ X0+(x) ∩ X0−(x), and (3) is less than or equal to

Pr{D = 1, Y = 1|X = x∗, P = p̲} − sup_{x′ ∈ X0−(x)} Pr{D = 1, Y = 1|X = x′, P = p̲} ≤ 0.

Since (3) is greater than or equal to 0 by construction, it follows that G1(0, x) is identified whenever there exists x∗ such that H(x, x∗) = 0. Using Lemma 2.1, we may state this condition equivalently as the existence of an x∗ such that ν1(1, x∗) = ν1(0, x).

REMARK 2.3: It is worth noting that there are several testable implications of equation (1) and Assumptions 2.1 and 2.2. A straightforward implication is that Pr{D = 1|X, Z} does not depend on X, and, as noted earlier, Lemma 2.1 implies that the sign of h(x, x′, p, p′) does not depend on p or p′ provided p > p′, whenever all conditional probabilities are well defined. It is also possible to show that, for d ∈ {0, 1}, there exists a real-valued function Qd(·) such that Pr{Y = 1, D = d|X, Z} = Pr{Y = 1, D = d|Qd(X), P(Z)}. Moreover, Pr{Y = 1, D = 1|Q1(X) = q, P = p} is strictly increasing in both q and p, while Pr{Y = 1, D = 0|Q0(X) = q, P = p} is strictly increasing in q and strictly decreasing in p.
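The first two implications lend themselves to a quick informal screen in data. The sketch below is again ours, not the authors': it reuses the hypothetical pjoint helper from the previous sketch, and the tolerance tol is an arbitrary illustration device, not a critical value from a formal test.

```python
# Informal screen for two testable implications noted in Remark 2.3 (not a test).
import numpy as np

def check_implications(y, d, x, z, tol=0.05):
    xs, zs = np.unique(x), np.unique(z)
    # Implication 1: Pr{D = 1 | X, Z} should not vary with X within a Z cell.
    for zv in zs:
        rates = [d[(z == zv) & (x == xv)].mean()
                 for xv in xs if ((z == zv) & (x == xv)).any()]
        if rates and max(rates) - min(rates) > tol:
            print(f"Pr{{D=1|X, Z={zv}}} varies with X by {max(rates) - min(rates):.3f}")
    # Implication 2: the sign of h(x, x', p, p') should be constant over pairs p > p'.
    p = np.array([d[z == zv].mean() for zv in z])  # plug-in P(Z), per observation
    supp = np.unique(p)
    for x0 in xs:
        for x1 in xs:
            hs = []
            for ph in supp:
                for pl in supp[supp < ph]:
                    t = (pjoint(y, d, x, p, 1, x1, ph), pjoint(y, d, x, p, 1, x1, pl),
                         pjoint(y, d, x, p, 0, x0, pl), pjoint(y, d, x, p, 0, x0, ph))
                    if None not in t:
                        hs.append((t[0] - t[1]) - (t[2] - t[3]))
            if hs and max(hs) > tol and min(hs) < -tol:
                print(f"sign of h({x0}, {x1}, p, p') flips across p > p' pairs")
```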
REFERENCES

BHATTACHARYA, J., A. SHAIKH, AND E. VYTLACIL (2008): “Treatment Effect Bounds Under Monotonicity Conditions: An Application to Swan–Ganz Catheterization,” American Economic Review, Papers and Proceedings, 98, 351–356. [954]
CHESHER, A. (2005): “Nonparametric Identification Under Discrete Variation,” Econometrica, 73, 1525–1550. [950]
CHIBURIS, R. (2010): “Semiparametric Bounds on Treatment Effects,” Journal of Econometrics, 159, 267–275. [950]
JUN, S. J., J. PINKSE, AND H. XU (2009): “Tighter Bounds in Triangular Systems,” Working Paper, Penn State University. [950]
MANSKI, C. (1989): “Anatomy of the Selection Problem,” Journal of Human Resources, 24, 343–360. [954]
MANSKI, C., AND J. PEPPER (2000): “Monotone Instrumental Variables With an Application to the Returns to Schooling,” Econometrica, 68, 997–1010. [954]
SHAIKH, A. M., AND E. J. VYTLACIL (2011): “Supplement to ‘Partial Identification in Triangular Systems of Equations With Binary Dependent Variables’: Appendix,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/9082_proofs.pdf. [951]
VYTLACIL, E. (2002): “Independence, Monotonicity, and Latent Index Models: An Equivalence Result,” Econometrica, 70, 331–341. [949]
VYTLACIL, E., AND N. YILDIZ (2007): “Dummy Endogenous Variables in Weakly Separable Models,” Econometrica, 75, 757–779. [949,950]
Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.; [email protected]
and
Dept. of Economics, Yale University, New Haven, CT 06520-8281, U.S.A.; [email protected].

Manuscript received February, 2010; final revision received October, 2010.
Econometrica, Vol. 79, No. 3 (May, 2011), 957–960
ANNOUNCEMENTS

2011 NORTH AMERICAN SUMMER MEETING
THE 2011 NORTH AMERICAN SUMMER MEETING of the Econometric Society will be held June 9–12, 2011, at Washington University in St. Louis, MO. The program will include submitted papers as well as the Presidential Address by John Moore (University of Edinburgh), the Walras-Bowley Lecture by Manuel Arellano (CEMFI), the Cowles Lecture by Michael Keane (University of New South Wales and Arizona State University), and the following semi-plenary sessions:

Behavioral economics
Jim Andreoni, University of California, San Diego
Ernst Fehr, University of Zurich

Decision theory
Bart Lipman, Boston University
Wolfgang Pesendorfer, Princeton University

Development economics
Abhijit Banerjee, Massachusetts Institute of Technology
Edward Miguel, University of California, Berkeley

Financial and informational frictions in macroeconomics
George-Marios Angeletos, Massachusetts Institute of Technology
Nobu Kiyotaki, Princeton University

Game theory
John Duggan, University of Rochester
Ehud Kalai, Northwestern University

Microeconometrics
Guido Imbens, Harvard University
Costas Meghir, University College London

Networks
Steven Durlauf, University of Wisconsin–Madison
Brian Rogers, Northwestern University

Time Series Econometrics
Bruce E. Hansen, University of Wisconsin–Madison
Ulrich Müller, Princeton University

Urban
Enrico Moretti, University of California, Berkeley
Esteban Rossi-Hansberg, Princeton University
Information on local arrangements will be available at http://artsci.wustl.edu/~econconf/EconometricSociety/.

Meeting Organizers:
Marcus Berliant, Washington University in St. Louis (Chair)
David K. Levine, Washington University in St. Louis
John Nachbar, Washington University in St. Louis

Program Committee:
Donald Andrews, Yale University
Marcus Berliant, Washington University in St. Louis
Steven Berry, Yale University
Ken Chay, University of California, Berkeley
Sid Chib, Washington University in St. Louis
John Conley, Vanderbilt University
Charles Engel, University of Wisconsin
Amy Finkelstein, Massachusetts Institute of Technology
Sebastian Galiani, Washington University in St. Louis
Donna Ginther, University of Kansas
Bart Hamilton, Washington University in St. Louis
Paul J. Healy, Ohio State University
Gary Hoover, University of Alabama
Tasos Kalandrakis, University of Rochester
David Levine, Washington University in St. Louis
Rody Manuelli, Washington University in St. Louis
John Nachbar, Washington University in St. Louis
Ray Riezman, University of Iowa
Aldo Rustichini, University of Minnesota
Suzanne Scotchmer, University of California, Berkeley
William Thomson, University of Rochester
Chris Waller, Federal Reserve Bank of St. Louis
Ping Wang, Washington University in St. Louis

2011 AUSTRALASIA MEETING
THE 2011 AUSTRALASIA MEETING of the Econometric Society (ESAM11) will be held in Adelaide, Australia, from July 5 to July 8, 2011. ESAM11 will be hosted by the School of Economics at the University of Adelaide. The program committee will be co-chaired by Christopher Findlay and Jiti Gao. The program will include plenary, invited, and contributed sessions in all fields of economics.
2011 ASIAN MEETING
THE 2011 ASIAN MEETING of the Econometric Society will be held on the campus of Korea University in Seoul, Korea, from August 11 to August 13, 2011. The program will consist of invited and contributed papers. Authors are encouraged to submit papers across the broad spectrum of theoretical and applied research in economics and in econometrics. The meeting is open to all economists, including those who are not currently members of the Econometric Society. The preliminary program is scheduled to be announced on March 31, 2011. Although the deadline for general registration is June 30, 2011, the authors of papers in the preliminary program will be required to register early, by April 30, 2011; otherwise, their submissions will be understood to be withdrawn. We plan to announce the final program by the end of May. Please refer to the conference website http://www.ames2011.org for more information.

IN-KOO CHO AND JINYONG HAHN
Co-Chairs of Program Committee

2011 EUROPEAN MEETING
THE 2011 EUROPEAN MEETING of the Econometric Society (ESEM) will take place in Oslo, Norway, from 25 to 29 August, 2011. The Meeting is organized by the University of Oslo and will run in parallel with the Congress of the European Economic Association (EEA). Participants will be able to attend all sessions of both events. The Program Committee Chairs are John van Reenen (London School of Economics) for Econometrics and Empirical Economics, and Ernst-Ludwig von Thadden (University of Mannheim) for Theoretical and Applied Economics. The Local Arrangements Chair is Asbjørn Rødseth (University of Oslo). Each author may submit only one paper to the ESEM and only one paper to the EEA Congress. The same paper cannot be submitted to both ESEM and the EEA Congress. At least one co-author must be a member of the Econometric Society or join at the time of submission. Decisions will be notified by 15 April, 2011. Paper presenters must register by 1 May, 2011.

2011 LATIN AMERICAN MEETING
THE 2011 LATIN AMERICAN MEETINGS will be held jointly with the Latin American and Caribbean Economic Association in Santiago, Chile, from November 10 to 12, 2011. The Meetings will be hosted by Universidad Adolfo Ibáñez.
The Annual Meetings of these two academic associations will be run in parallel, under a single local organization. By registering for LAMES 2011, participants will be welcome to attend all sessions of both meetings. The Program Chair is Andrea Repetto. The Local Organizers are Andrea Repetto, Matias Braun, Fernando Larrain, and Claudio Soto. The deadline for submissions is June 1, 2011, and decisions will be sent by July 10, 2011. Authors may submit only one paper to each meeting, and the same paper cannot be submitted to both meetings. Authors submitting papers must be members of the respective association at the time of submission. Membership information can be found at http://www.econometricsociety.org and http://www.lacea.org. A limited number of papers will be invited to be presented in poster sessions that will be organized by topic. Further information can be found at the conference website at http://www.lacealames2011.cl or by email at
[email protected]. 2012 NORTH AMERICAN WINTER MEETING
THE 2012 NORTH AMERICAN WINTER MEETING of the Econometric Society will be held in Chicago, IL, on January 6–8, 2012, as part of the annual meeting of the Allied Social Science Associations. The program will consist of contributed and invited papers. The program committee invites contributions in the form of individual papers and entire sessions (of three or four papers). Each person may submit and present only one paper, but may be a co-author of several papers submitted to the conference. At least one co-author must be a member of the Society. You may join the Econometric Society at http://www.econometricsociety.org. Submissions should represent original manuscripts not previously presented at any Econometric Society regional meeting or submitted to other professional organizations for presentation at these same meetings. Prospective contributors are invited to submit titles and abstracts of their papers by May 4, 2011, at the conference website:

https://editorialexpress.com/conference/NAWM2012

Authors who submit complete papers will be treated favorably.

JONATHAN LEVIN
Program Committee Chair
Econometrica, Vol. 79, No. 3 (May, 2011), 961
FORTHCOMING PAPERS

THE FOLLOWING MANUSCRIPTS, in addition to those listed in previous issues, have been accepted for publication in forthcoming issues of Econometrica.

BARRO, ROBERT J., AND TAO JIN: “On the Size Distribution of Macroeconomic Disasters.”
BENOÎT, JEAN-PIERRE, AND JUAN DUBRA: “Apparent Overconfidence.”
DAROLLES, SERGE, YANQIN FAN, JEAN-PIERRE FLORENS, AND ERIC MICHEL RENAULT: “Nonparametric Instrumental Regression.”
Econometrica, Vol. 79, No. 3 (May, 2011), 963–969
2010 ELECTION OF FELLOWS TO THE ECONOMETRIC SOCIETY
THE FELLOWS OF THE ECONOMETRIC SOCIETY elected sixteen new Fellows in 2010. Their names and selected bibliographies are given below.

FRANKLIN ALLEN, Nippon Life Professor of Finance and Professor of Economics, Wharton School, University of Pennsylvania.
“Arbitrage, Short Sales and Financial Innovation” (with D. Gale), Econometrica, 59 (1991), 1041–1068.
“Finite Bubbles With Short Sale Constraints and Asymmetric Information” (with S. Morris and A. Postlewaite), Journal of Economic Theory, 61 (1993), 206–229.
“Financial Markets, Intermediaries, and Intertemporal Smoothing” (with D. Gale), Journal of Political Economy, 105 (1997), 523–546.
“Financial Contagion” (with D. Gale), Journal of Political Economy, 108 (2000), 1–33.
“Financial Intermediaries and Markets” (with D. Gale), Econometrica, 72 (2004), 1023–1061.
“Beauty Contests, Bubbles and Iterated Expectations in Asset Prices” (with S. Morris and H. Shin), Review of Financial Studies, 19 (2006), 719–752.

BRUNO BIAIS, Research Professor, Toulouse School of Economics.
“Asset Prices and Trading Volume in a Beauty Contest” (with P. Bossaerts), Review of Economic Studies, 65 (1998), 307–340.
“Competing Mechanisms in a Common Value Environment” (with D. Martimort and J.-C. Rochet), Econometrica, 68 (2000), 799–837.
“Strategic Liquidity Supply and Security Design” (with T. Mariotti), Review of Economic Studies, 72 (2005), 615–649.
“Judgmental Overconfidence, Self-Monitoring and Trading Performance in an Experimental Financial Market” (with D. Hilton, K. Mazurier, and S. Pouget), Review of Economic Studies, 72 (2005), 287–312.
“Dynamic Security Design” (with T. Mariotti, G. Plantin, and J.-C. Rochet), Review of Economic Studies, 74 (2007), 345–390.
“Large Risks, Limited Liability, and Dynamic Moral Hazard” (with T. Mariotti, J.-C. Rochet, and S. Villeneuve), Econometrica, 78 (2010), 73–118.
PETER BOSSAERTS, William D. Hacker Professor of Economics and Management, California Institute of Technology, and Professor, Swiss Finance Institute at Ecole Polytechnique Fédérale Lausanne.
“A General Equilibrium Model of Changing Risk Premia: Theory and Test” (with R. C. Green), Review of Financial Studies, 2 (1989), 467–493.
“The Econometrics of Learning in Financial Markets,” Econometric Theory, 11 (1995), 151–189.
The Paradox of Asset Pricing. Princeton: Princeton University Press (2002).
“Prices and Allocations in Financial Markets: Theory, Econometrics, and Experiments” (with C. Plott and W. Zame), Econometrica, 75 (2007), 993–1038.
“Neural Correlates of Mentalizing-Related Computations During Strategic Interactions in Humans” (with A. Hampton and J. O’Doherty), Proceedings of the National Academy of Sciences, 105 (2008), 6741–6746.

MARKUS K. BRUNNERMEIER, Edwards S. Sanford Professor of Economics, Princeton University.
“Bubbles and Crashes” (with D. Abreu), Econometrica, 71 (2003), 173–204.
“Hedge Funds and the Technology Bubble” (with S. Nagel), Journal of Finance, 59 (2004), 2013–2040.
“Optimal Expectations” (with J. Parker), American Economic Review, 95 (2005), 1092–1118.
“Predatory Trading” (with L. Pedersen), Journal of Finance, 60 (2005), 1825–1863.
“Do Wealth Fluctuations Generate Time-Varying Risk Aversion? Micro-Evidence From Individuals’ Asset Allocation” (with S. Nagel), American Economic Review, 98 (2008), 713–736.
“Market Liquidity and Funding Liquidity” (with L. Pedersen), Review of Financial Studies, 22 (2009), 2201–2238.

PARKASH CHANDER, Professor of Economics, National University of Singapore.
“A Planning Process Due to Taylor,” Econometrica, 46 (1978), 761–777.
“On the Informational Size of Message Spaces for Efficient Resource Allocation Processes,” Econometrica, 51 (1983), 919–938.
“Corruption in Tax Administration” (with L. Wilde), Journal of Public Economics, 49 (1992), 333–349.
“Dynamic Procedures and Incentives in Public Good Economies,” Econometrica, 61 (1993), 1341–1354.
“The Core of an Economy With Multilateral Environmental Externalities” (with H. Tulkens), International Journal of Game Theory, 26 (1997), 379–401.
“A General Characterization of Optimal Income Tax Enforcement” (with L. Wilde), Review of Economic Studies, 65 (1998), 165–183.

ESTHER DUFLO, Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics, Massachusetts Institute of Technology.
“Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence From an Unusual Policy Experiment,” American Economic Review, 91 (2001), 795–813.
“How Much Should We Trust Differences-in-Differences Estimates?” (with S. Mullainathan and M. Bertrand), Quarterly Journal of Economics, 119 (2004), 249–275.
“Women as Policy Makers: Evidence From a Randomized Policy Experiment in India” (with R. Chattopadhyay), Econometrica, 72 (2004), 1409–1443.
“Nudging Farmers to Use Fertilizer: Evidence From Kenya” (with M. Kremer and J. Robinson), American Economic Review (forthcoming).
“Peer Effects and the Impacts of Tracking: Evidence From a Randomized Evaluation in Kenya” (with P. Dupas and M. Kremer), American Economic Review (forthcoming).

JEAN-PIERRE FLORENS, Professor of Statistics and Econometrics, Toulouse School of Economics.
“A Note on Non-Causality” (with M. Mouchart), Econometrica, 50 (1982), 583–592.
“A Linear Theory for Non-Causality” (with M. Mouchart), Econometrica, 53 (1985), 157–176.
“Noncausality in Continuous Time” (with D. Fougère), Econometrica, 64 (1996), 1195–1212.
“Encompassing and Specificity” (with J.-F. Richard and D. Hendry), Econometric Theory, 12 (1996), 620–656.
“Generalisation of GMM to a Continuum of Moment Conditions” (with M. Carrasco), Econometric Theory, 16 (2000), 797–834.
“Identification of Treatment Effects Using Control Functions in Models With Continuous, Endogenous Treatment and Heterogeneous Effects” (with J. Heckman, C. Meghir, and E. Vytlacil), Econometrica, 76 (2008), 1191–1206.

ROBERT G. KING, Professor of Economics, Boston University.
“Monetary Policy and the Information Content of Prices,” Journal of Political Economy, April (1982), 247–279.
“Stochastic Trends and Economic Fluctuations” (with C. Plosser, J. Stock, and M. Watson), American Economic Review, 81 (1991), 819–840.
“Finance and Growth: Schumpeter Might Be Right” (with R. Levine), Quarterly Journal of Economics, 108 (1993), 717–738.
“Measuring Business Cycles: Approximate Band-Pass Filters for Economic Time Series” (with M. Baxter), Review of Economics and Statistics, 81 (1999), 575–593.
“State-Dependent Pricing and the General Equilibrium Dynamics of Money and Output” (with M. Dotsey and A. Wolman), Quarterly Journal of Economics, 114 (1999), 655–690.
“Managing Expectations” (with Y. K. Lu and E. S. Pasten), Journal of Money, Credit and Banking, 40 (8), December (2008), 1625–1666.

FELIX KUBLER, Professor of Financial Economics, University of Zurich.
“Stationary Equilibria in Asset-Pricing Models With Incomplete Markets and Collateral” (with K. Schmedders), Econometrica, 71 (2003), 1767–1795.
“Observable Restrictions of General Equilibrium Models With Financial Markets,” Journal of Economic Theory, 110 (2003), 137–153.
“Approximate versus Exact Equilibria in Dynamic Economies” (with K. Schmedders), Econometrica, 73 (2005), 1205–1235.
“Pareto Improving Social Security Reform When Financial Markets Are Incomplete” (with D. Krueger), American Economic Review, 96 (2006), 737–755.
“Borrowing Costs and the Demand for Equity Over the Life Cycle” (with S. J. Davis and P. Willen), Review of Economics and Statistics, 88 (2006), 348–362.
“Approximate Generalizations and Computational Experiments,” Econometrica, 75 (2007), 967–992.

IGNACIO N. LOBATO, Professor of Economics, ITAM.
“A Nonparametric Test for I(0)” (with P. Robinson), Review of Economic Studies, 65 (1998), 475–495.
“A Semiparametric Two-Step Estimator for a Multivariate Long Memory Model,” Journal of Econometrics, 90 (1999), 129–153.
“Testing That a Dependent Process Is Uncorrelated,” Journal of the American Statistical Association, 96 (2001), 1066–1076.
“Consistent Estimation of Models Defined by Conditional Moment Restrictions” (with M. A. Dominguez), Econometrica, 72 (2004), 1601–1615.
“Efficient Wald Test for Fractional Unit Roots” (with C. Velasco), Econometrica, 75 (2007), 575–589.
“An Automatic Portmanteau Test for Serial Correlation” (with J. C. Escanciano), Journal of Econometrics, 151 (2009), 140–149.
GEORGE LOEWENSTEIN, Herbert A. Simon Professor of Economics and Psychology, Carnegie Mellon University.
“Anomalies in Intertemporal Choice: Evidence and an Interpretation” (with D. Prelec), Quarterly Journal of Economics, 107 (1992), 573–597.
“Biased Judgments of Fairness in Bargaining” (with L. Babcock, S. Issacharoff, and C. Camerer), American Economic Review, 85 (1995), 1337–1343.
“Labor Supply of New York City Cabdrivers: One Day at a Time” (with C. Camerer, L. Babcock, and R. Thaler), Quarterly Journal of Economics, 112 (1997), 407–441.
“Projection Bias in Predicting Future Utility” (with T. O’Donoghue and M. Rabin), Quarterly Journal of Economics, 118 (2003), 1209–1248.
“Neuroeconomics: How Neuroscience Can Inform Economics” (with C. Camerer and D. Prelec), Journal of Economic Literature, 43 (2005), 9–64.

PABLO ANDRES NEUMEYER, Professor of Economics, Universidad Torcuato Di Tella.
“Seigniorage and Inflation: The Case of Argentina” (with M. Kiguel), Journal of Money, Credit and Banking, 27 (1995), 672–682.
“Currencies and the Allocation of Risk: The Welfare Effects of Monetary Union,” American Economic Review, 88 (1998), 246–259.
“Inflation-Stabilization Risk in Economies With Incomplete Asset Markets,” Journal of Economic Dynamics and Control, 23 (1998), 371–391.
“The Time Consistency of Optimal Fiscal and Monetary Policies” (with F. Alvarez and P. J. Kehoe), Econometrica, 72 (2004), 541–567.
“Business Cycles in Emerging Economies: The Role of Interest Rates” (with F. Perri), Journal of Monetary Economics, 52 (2005), 345–380.

JOHN QUIGGIN, Australian Research Council Professorial Fellow, University of Queensland.
“A Theory of Anticipated Utility,” Journal of Economic Behavior and Organization, 3 (1982), 323–343.
Generalized Expected Utility Theory: The Rank-Dependent Expected Utility Model. Amsterdam: Kluwer-Nijhoff (1995).
“Convergence in GDP and Living Standards: A Revealed Preference Approach” (with S. Dowrick), American Economic Review, 87 (1997), 41–64.
“A State-Contingent Production Approach to Principal-Agent Problems With an Application to Point-Source Pollution Control” (with R. G. Chambers), Journal of Public Economics, 70 (1998), 441–472.
Production Under Uncertainty: The State-Contingent Approach (with R. G. Chambers). New York: Cambridge University Press (2000).
“The Risk Premium for Equity: Implications for the Proposed Diversification of the Social Security Trust Fund” (with S. Grant), American Economic Review, 92 (2002), 1104–1115.

KLAUS M. SCHMIDT, Professor of Economics, University of Munich.
“Reputation and Equilibrium Characterization in Repeated Games With Conflicting Interests,” Econometrica, 61 (1993), 325–351.
“Option Contracts and Renegotiation: A Solution to the Hold-up Problem” (with G. Nöldeke), Rand Journal of Economics, 26 (1995), 163–179.
“Managerial Incentives and Product Market Competition,” Review of Economic Studies, 64 (1997), 191–214.
“A Theory of Fairness, Competition, and Cooperation” (with E. Fehr), Quarterly Journal of Economics, 114 (1999), 817–868.
“Discrete-Time Approximations of the Holmström–Milgrom Brownian Motion Model of Intertemporal Incentive Provision” (with M. Hellwig), Econometrica, 70 (2002), 2225–2264.
“Fairness and Contract Design” (with E. Fehr and A. Klein), Econometrica, 75 (2007), 121–154.

T. PAUL SCHULTZ, Malcolm K. Brachman Professor of Economics, Yale University.
“The Distribution of Personal Income,” Joint Economic Committee, Congress of the United States. Washington: GPO (1964).
“Love and Life Between the Censuses” (with M. Nerlove), in Structural Equations Models in the Social Sciences, ed. by A. Goldberger and O. D. Duncan. New York: Seminar Press (1973).
“Education Investments and Returns,” in Handbook of Development Economics, Vol. 1, ed. by H. Chenery and T. N. Srinivasan. Amsterdam: North-Holland (1988), 543–630.
“Testing the Neoclassical Model of Family Labor Supply and Fertility,” Journal of Human Resources, 25 (1990), 599–634.
“Investments in the Schooling and Health of Women and Men: Quantities and Returns,” Journal of Human Resources, 28 (1993), 694–734.
“School Subsidies for the Poor: Evaluating the Mexican Progresa Poverty Program,” Journal of Development Economics, 74 (2004), 199–250.

YOON-JAE WHANG, Professor of Economics, Seoul National University.
“Tests of Specification for Parametric and Semiparametric Models” (with D. W. K. Andrews), Journal of Econometrics, 57 (1993), 277–318.
“Consistent Bootstrap Tests of Parametric Regression Functions,” Journal of Econometrics, 98 (2000), 27–46.
“Consistent Testing for Stochastic Dominance Under General Sampling Schemes” (with O. Linton and E. Maasoumi), Review of Economic Studies, 72 (2005), 735–765.
“A Quantilogram Approach to Evaluating Directional Predictability” (with O. Linton), Journal of Econometrics, 141 (2007), 250–282.
“Testing for Stochastic Monotonicity” (with S. Lee and O. Linton), Econometrica, 77 (2009), 585–602.
“Testing for Non-Nested Conditional Moment Restrictions via Unconditional Empirical Likelihood” (with T. Otsu and M. Seo), Journal of Econometrics (2010, forthcoming).