Measuring Capital in the New Economy
Studies in Income and Wealth Volume 65
National Bureau of Economic Research Conference on Research in Income and Wealth
Measuring Capital in the New Economy
Edited by
Carol Corrado, John Haltiwanger, and Daniel Sichel
The University of Chicago Press Chicago and London
Carol Corrado is chief of the Industrial Output Section in the Division of Research and Statistics at the Federal Reserve Board. John Haltiwanger is professor of economics at the University of Maryland and a research associate of the National Bureau of Economic Research. Daniel Sichel is assistant director in the Division of Research and Statistics at the Federal Reserve Board.
The University of Chicago Press, Chicago 60637
The University of Chicago Press, Ltd., London
© 2005 by the National Bureau of Economic Research
All rights reserved. Published 2005
Printed in the United States of America
14 13 12 11 10 09 08 07 06 05    1 2 3 4 5
ISBN: 0-226-11612-3 (cloth)

"Information Processing Equipment and Software in the National Accounts" was produced as a work of the U.S. government. No copyright is claimed for this contribution.

Library of Congress Cataloging-in-Publication Data
Measuring capital in the new economy / edited by Carol Corrado, John Haltiwanger, and Daniel Sichel.
p. cm. — (Studies in income and wealth ; v. 65)
Papers presented at a conference at the Federal Reserve Board in Washington, D.C., held in Apr. 2002 by the Conference on Research in Income and Wealth of the National Bureau of Economic Research.
Includes bibliographical references and index.
ISBN 0-226-11612-3 (alk. paper)
1. Capital—Statistical methods. 2. Capital investments—Statistical methods. 3. Capital productivity—Statistical methods. I. Corrado, Carol. II. Haltiwanger, John C. III. Sichel, Daniel E. IV. Conference on Research in Income and Wealth. V. Series.
106.3.714 vol. 65 [79.3] 330 s—dc22 [332/.041/0151 2005010524

The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI Z39.48-1992.
National Bureau of Economic Research Officers Michael H. Moskow, chairman Elizabeth E. Bailey, vice-chairman Martin Feldstein, president and chief executive officer Susan Colligan, vice president for administration and budget and corporate secretary
Robert Mednick, treasurer Kelly Horak, controller and assistant corporate secretary Gerardine Johnson, assistant corporate secretary
Directors at Large Peter C. Aldrich Elizabeth E. Bailey John H. Biggs Andrew Brimmer John S. Clarkeson Don R. Conlan George C. Eads Jessica P. Einhorn
Martin Feldstein Jacob A. Frenkel Judith M. Gueron Robert S. Hamada George Hatsopoulos Karen N. Horn Judy C. Lewent John Lipsky
Laurence H. Meyer Michael H. Moskow Alicia H. Munnell Rudolph A. Oswald Robert T. Parry Richard N. Rosett Marina v. N. Whitman Martin B. Zimmerman
Directors by University Appointment George Akerlof, California, Berkeley Jagdish Bhagwati, Columbia Ray C. Fair, Yale Michael J. Brennan, California, Los Angeles Glen G. Cain, Wisconsin Franklin Fisher, Massachusetts Institute of Technology Saul H. Hymans, Michigan Marjorie B. McElroy, Duke
Joel Mokyr, Northwestern Andrew Postlewaite, Pennsylvania Uwe E. Reinhardt, Princeton Nathan Rosenberg, Stanford Craig Swan, Minnesota David B. Yoffie, Harvard Arnold Zellner (director emeritus), Chicago
Directors by Appointment of Other Organizations Richard B. Berner, National Association for Business Economics Gail D. Fosler, The Conference Board Richard C. Green, American Finance Association Arthur B. Kennickell, American Statistical Association Thea Lee, American Federation of Labor and Congress of Industrial Organizations William W. Lewis, Committee for Economic Development
Robert Mednick, American Institute of Certified Public Accountants Angelo Melino, Canadian Economics Association Jeffrey M. Perloff, American Agricultural Economics Association John J. Siegfried, American Economic Association Gavin Wright, Economic History Association
Directors Emeriti Carl F. Christ Lawrence R. Klein Franklin A. Lindsay
Paul W. McCracken Peter G. Peterson Eli Shapiro
Arnold Zellner
Relation of the Directors to the Work and Publications of the National Bureau of Economic Research

1. The object of the NBER is to ascertain and present to the economics profession, and to the public more generally, important economic facts and their interpretation in a scientific manner without policy recommendations. The Board of Directors is charged with the responsibility of ensuring that the work of the NBER is carried on in strict conformity with this object.

2. The President shall establish an internal review process to ensure that book manuscripts proposed for publication do not contain policy recommendations. This shall apply both to the proceedings of conferences and to manuscripts by a single author or by one or more coauthors but shall not apply to authors of comments at NBER conferences who are not NBER affiliates.

3. No book manuscript reporting research shall be published by the NBER until the President has sent to each member of the Board a notice that a manuscript is recommended for publication and that in the President's opinion it is suitable for publication in accordance with the above principles of the NBER. Such notification will include a table of contents and an abstract or summary of the manuscript's content, a list of contributors if applicable, and a response form for use by Directors who desire a copy of the manuscript for review. Each manuscript shall contain a summary drawing attention to the nature and treatment of the problem studied and the main conclusions reached.

4. No volume shall be published until forty-five days have elapsed from the above notification of intention to publish it. During this period a copy shall be sent to any Director requesting it, and if any Director objects to publication on the grounds that the manuscript contains policy recommendations, the objection will be presented to the author(s) or editor(s). In case of dispute, all members of the Board shall be notified, and the President shall appoint an ad hoc committee of the Board to decide the matter; thirty days additional shall be granted for this purpose.

5. The President shall present annually to the Board a report describing the internal manuscript review process, any objections made by Directors before publication or by anyone after publication, any disputes about such matters, and how they were handled.

6. Publications of the NBER issued for informational purposes concerning the work of the Bureau, or issued to inform the public of the activities at the Bureau, including but not limited to the NBER Digest and Reporter, shall be consistent with the object stated in paragraph 1. They shall contain a specific disclaimer noting that they have not passed through the review procedures required in this resolution. The Executive Committee of the Board is charged with the review of all such publications from time to time.

7. NBER working papers and manuscripts distributed on the Bureau's web site are not deemed to be publications for the purpose of this resolution, but they shall be consistent with the object stated in paragraph 1. Working papers shall contain a specific disclaimer noting that they have not passed through the review procedures required in this resolution. The NBER's web site shall contain a similar disclaimer. The President shall establish an internal review process to ensure that the working papers and the web site do not contain policy recommendations, and shall report annually to the Board on this process and any concerns raised in connection with it.

8. Unless otherwise determined by the Board or exempted by the terms of paragraphs 6 and 7, a copy of this resolution shall be printed in each NBER publication as described in paragraph 2 above.
Contents

Preface  ix

Introduction  1
    Carol Corrado, John Haltiwanger, and Daniel Sichel

1. Measuring Capital and Technology: An Expanded Framework  11
    Carol Corrado, Charles Hulten, and Daniel Sichel
    Comment: Edward C. Prescott

2. A New Approach to the Valuation of Intangible Capital  47
    Jason G. Cummins

3. The Valuation of Organization Capital  73
    Baruch Lev and Suresh Radhakrishnan
    Comment: Timothy F. Bresnahan

4. Intangible Risk  111
    Lars Peter Hansen, John C. Heaton, and Nan Li
    Comment: Susanto Basu

5. The Relation among Human Capital, Productivity, and Market Value: Building Up from Micro Evidence  153
    John M. Abowd, John Haltiwanger, Ron Jarmin, Julia Lane, Paul Lengermann, Kristin McCue, Kevin McKinney, and Kristin Sandusky
    Comment: Katharine G. Abraham

6. Measuring Organizational Capital in the New Economy  205
    Sandra E. Black and Lisa M. Lynch
    Comment: Kathryn Shaw

7. Pharmaceutical Knowledge-Capital Accumulation and Longevity  237
    Frank R. Lichtenberg
    Comment: Eric J. Bartelsman

8. R&D in the National Income and Product Accounts: A First Look at Its Effect on GDP  275
    Barbara M. Fraumeni and Sumiye Okubo
    Comment: Bronwyn H. Hall

9. Communications Equipment: What Has Happened to Prices?  323
    Mark Doms

10. Information-Processing Equipment and Software in the National Accounts  363
    Bruce T. Grimm, Brent R. Moulton, and David B. Wasshausen
    Comment: Barry Bosworth

11. Growth of U.S. Industries and Investments in Information Technology and Higher Education  403
    Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh
    Comment: Michael J. Harper

12. Issues in the Measurement of Capital Services, Depreciation, Asset Price Changes, and Interest Rates  479
    W. Erwin Diewert
    Comment: Frank C. Wykoff

Remarks  557
    Robert E. Hall
    Baruch Lev
    Erik Brynjolfsson and Lorin Hitt

Contributors  577
Author Index  581
Subject Index  587
Preface
This volume contains revised versions of most of the papers and discussions presented at the Conference on Research in Income and Wealth entitled "Measuring Capital in the New Economy," held at the Federal Reserve Board in Washington, D.C., on April 26–27, 2002. It also contains some material not presented at that conference.

Funds for the Conference on Research in Income and Wealth are supplied by the Bureau of Economic Analysis, the Bureau of Labor Statistics, the Census Bureau, the Federal Reserve Board, and Statistics Canada. For this conference, additional financial support was provided by the National Science Foundation under grant SES-0137120. We are indebted to these organizations for their support.

We thank Carol Corrado, John Haltiwanger, and Daniel Sichel, who served as conference organizers and editors of the volume. We also thank the Federal Reserve Board for hosting the conference and the NBER staff and University of Chicago Press editors for their assistance in organizing the conference and editing the volume.

Executive Committee, April 2005
John M. Abowd, Susanto Basu, Ernst R. Berndt, Carol A. Corrado, Robert C. Feenstra, John Greenlees, John C. Haltiwanger, Michael J. Harper, Charles R. Hulten (chair), Ronald Jarmin, J. Bradford Jensen, Lawrence F. Katz, J. Steven Landefeld, Brent R. Moulton, Mark J. Roberts, Matthew Shapiro, David W. Wilcox
Introduction
Carol Corrado, John Haltiwanger, and Daniel Sichel
As the new economy has developed, intangible assets and high-technology investments have come to play an increasingly important role.1 These developments have raised many important questions about measurement, including how to treat intangible assets in economic accounts and whether we are accurately measuring newer, high-technology capital. Economic researchers, data providers, and policy analysts are interested in answering these questions because the answers can lead to better assessments of the economy's long-run pace of economic growth and rate of technological advance, as well as to improved measures of national wealth.

In April 2002, the Conference on Research in Income and Wealth (CRIW) of the National Bureau of Economic Research (NBER) held a conference at the Federal Reserve Board in Washington, D.C. The purpose was to develop new results in capital measurement and to explore their practical implications for the measurement community. This volume includes the papers presented at the conference, the discussants' remarks, and summary remarks of some of the key conference participants. The papers and comments have undergone review and, in some cases, substantial revision since their presentation at the conference.

Carol Corrado is chief of the Industrial Output section in the Division of Research and Statistics at the Federal Reserve Board and a member of the executive committee of the Conference on Research in Income and Wealth. John Haltiwanger is professor of economics at the University of Maryland, a research associate of the National Bureau of Economic Research, and a member of the executive committee of the Conference on Research in Income and Wealth. Daniel Sichel is assistant director in the Division of Research and Statistics at the Federal Reserve Board. The views expressed in this introduction are those of the authors and should not be attributed to the Board of Governors of the Federal Reserve System or other members of its staff.
1. We use the term "new economy" broadly to encompass the ongoing changes in the economy, particularly those related to information technology.

The themes of the conference and this volume concern the challenges of how to measure physical capital accumulation and the contribution of intangible assets in an economy that is increasingly dominated by high-technology capital and undergoing change brought about by advances in information technology. Just as Griliches (1994) argued that the fraction of output that is difficult to measure has been growing over time, we argue that the fraction of capital that is challenging to measure has been growing over time as well. Growth of high-tech capital has been extremely rapid in recent decades, and although data are sparse, the growth of intangible capital appears to have been rapid as well.

Substantial attention has already been devoted to some of the measurement challenges regarding high-tech assets. For example, the focus on measuring real investment in computers and on studying the prices of semiconductors for computers has been considerable. Less attention, however, has been devoted to the measurement of the asset accumulation process for high-technology communications capital and for computer software capital; the latter is a major type of intangible asset that the Bureau of Economic Analysis (BEA) has only recently included in its capital accounts. This volume devotes needed attention to the measurement of communications equipment and software capital.

This volume also devotes substantial attention to the measurement of intangible assets not now on the national balance sheet. Relatively little is known about the precise size of these assets, although recent work points to the potentially large magnitude of business spending on intangibles (Nakamura 2001; Lev 2001). Moreover, an extensive literature documents the profitability of business spending on research and development (R&D) and employer-provided training, suggesting that one can broadly argue that intellectual capital, organizational capital, human capital, and human resource practices are crucial assets of successful firms in today's economy (Hall 2000, 2001a,b; Lev 2001). A growing body of research also emphasizes the complementarities between investment in high-tech capital, including software capital, and the creation of intangibles in the form of network externalities (Shapiro and Varian 1999) and organizational and firm-specific human capital (Brynjolfsson and Hitt 2000). Such complementarities likely exist both in dot-coms and in large manufacturing establishments, although the nature of the complementarity likely differs substantially across sectors and businesses. Although much work remains to be done in this literature, these types of intangible capital and their interaction with other forms of capital also receive considerable attention in this volume. Indeed, the desire to better understand this interaction is a key reason the conference focused on the challenges of measuring capital quality and intangibles. As noted
above, these measurement challenges are similar in that their importance and relevance have grown with the spread of information technology. They are very distinct, however, in the questions addressed: "Are we capturing technological change in our price measures for high-tech assets?" versus "How should we determine the boundary between what is investment and capital and what is not?" Accordingly, the papers in this volume address diverse measurement issues.

Without a doubt, some of the measurement issues highlighted here are controversial. The flourishing U.S. economy of the late 1990s—especially its booming stock market—has been partly responsible for generating the interest in better measuring and understanding the role of capital quality and intangibles in our modern economy. Indeed, a sizable literature attempts to determine how much of the gap between the market value of corporate securities and the value of corporate capital at replacement cost can be explained by intangibles. Although several papers in this volume are skeptical about using market valuations to measure intangibles, the perspective that important inferences about firm performance can be drawn from market valuations receives support in this volume. At the same time, other papers in the volume offer alternative strategies for measuring intangibles based on spending or earnings data.

We recognize that it is vital to incorporate the perspective of the accounting community in addressing the measurement issues highlighted in this volume. One challenge is that even if a widespread consensus forms on the need to develop and improve measures of intangible capital, the measurement difficulties and the need for transparency in business accounting yield an inherent tension. Moreover, business accounting practices are currently undergoing scrutiny that stems from questionable accounting practices by some firms; often such practices are connected to the treatment of intangibles in business accounts. Although these developments raise a new set of challenges, we believe that it is essential to continue moving down the road toward a better understanding and fuller recognition of intangibles.

At the conference and in this volume, the issues highlighted in this introduction are addressed from a wide variety of perspectives and approaches. In the remainder of this introduction, each paper in the volume is summarized. The summaries are not exhaustive, as the papers speak for themselves, and some perspective is provided on how the papers fit together, given the objectives of the conference.

An Overview of the Papers

The first paper in the volume, "Measuring Capital and Technology: An Expanded Framework," provides guidance on what components of business spending should be included in measures of capital investment. Carol Corrado, Charles Hulten, and Daniel Sichel present an intertemporal
model that defines capital investment as follows: Any outlay that is intended to increase future rather than current consumption should be treated as investment. Thus, spending on a host of intangibles—computerized databases, R&D, new copyrights and licenses, brand equity, and improved organizational structures—should, in principle, be counted as investment in economic accounts. The authors use their framework to identify and measure the components of business capital spending on intangibles. They find that, by the late 1990s, business fixed investment in intangibles may have been as large as investment in traditional equipment and structures. They indicate further that a significant portion of this investment is excluded from the existing investment figures in U.S. national income and product accounts (NIPAs) and suggest that a move to recognize intangible capital might alter our understanding of the factors determining economic growth.

The next few papers explore conceptual and measurement issues for intangible capital. Jason G. Cummins in "A New Approach to the Valuation of Intangible Capital" raises both conceptual and measurement issues about the market valuation approach to measuring intangible capital that has appeared in the recent literature. In the market valuation approach, intangible capital is measured as the gap between the market value of a corporation's securities and the replacement value of its physical assets. Cummins takes a different conceptual and empirical approach. He models intangibles within an adjustment-cost framework and then exploits data from analysts' forecasts of earnings to test his approach and framework. Compared with the market valuation approach, Cummins's approach finds a substantial, but much smaller, valuation of intangible capital.

In "The Valuation of Organizational Capital," Baruch Lev and Suresh Radhakrishnan take a different approach in measuring what they call "organizational capital." They define organizational capital as the knowledge and structure used to combine inputs, such as physical capital and labor, to produce goods and services. Some businesses have more of this organizational capital than others—that is, they are able to produce more or more-valuable goods and services with a given amount of physical and human capital. Using this conceptual framework, Lev and Radhakrishnan develop alternative measures of organizational capital. The primary approach is to measure organizational capital as a residual, much like total factor productivity. Lev and Radhakrishnan test the information content of these measures of organizational capital by considering how financial markets value them.

In "Intangible Risk," Lars Peter Hansen, John C. Heaton, and Nan Li raise further questions about using financial market data to infer measures of intangible capital. They point out that drawing inferences about intangible capital from stock market returns must consider risk or offer some competing interpretation for the heterogeneity in stock market returns. Put
differently, asset valuations reflect beliefs about not only the future prospects for firms but also the riskiness of the implied cash flows. Taking risk into consideration, they present evidence that important differences exist between the risks associated with investments in traditional measured capital and those associated with intangible capital.

Other papers explore alternative direct measures of the factors that either compose or are closely connected to intangible capital. The first group focuses on human capital and human resource practices. The growing role of intangibles is conjectured to be linked to an increasing role of firm-specific human capital and associated human resource practices. That is, the tacit knowledge of employees is a key intangible asset, and successful businesses are those that manage the accumulation of such assets well through their human resource and organizational practices. For this important and difficult area of inquiry, the volume includes two papers.

In the first paper, "The Relation among Human Capital, Productivity, and Market Value: Building up from Micro Evidence," John M. Abowd, John Haltiwanger, Ron Jarmin, Julia Lane, Paul Lengermann, Kristin McCue, Kevin McKinney, and Kristin Sandusky explore new micro-based measures of human capital and the connections among human capital, productivity, and market value. Well established in the literature is that traditional human capital measures (for example, education) are important factors in production and that traditional measures capture only a small fraction of what we think of as human capital. Using newly developed, longitudinal, employer-employee matched data, the authors construct new, comprehensive measures of human capital and look at the connection between the new human capital measures and firm performance. Interestingly, but perhaps not surprisingly, the new measures help account for the observed variation in labor productivity across businesses. Even more interesting is that the new human capital measures also help account for the variation in market valuation across businesses after controlling for observed physical assets. The latter finding suggests that human capital is closely connected to intangible capital.

In their paper "Measuring Organizational Capital in the New Economy," Sandra E. Black and Lisa M. Lynch explore these closely related issues but use a different approach. Rather than focusing on the differences in human capital across businesses, Black and Lynch focus on the observed differences in human resource practices (including training) across businesses. Of course, the latter may have an effect on and interact with the differences in human capital across businesses, but the focus here is on the differences in the management of human capital across businesses. These authors review the burgeoning recent literature in this area and argue that human resource practices appear to be quite important in accounting for observed differences in firm performance. Moreover, the authors discuss recent related findings that the returns from the adoption of advanced
technologies are improved by the adoption of state-of-the-art human resource practices.

Industrial R&D has long been seen as one of the key factors underlying the differences in the products and processes across businesses. In other words, R&D has long been viewed as one of the critical factors determining the growth in total factor productivity. However, measuring R&D capital and its effects on firm performance is inherently difficult. One of the major challenges is having the appropriate data to capture the full effects of R&D on firm performance. In his paper "Pharmaceutical Knowledge-Capital Accumulation and Longevity," Frank R. Lichtenberg takes a fresh look at these difficult issues in the pharmaceutical industry. Using rich micro-level data on the development of new drugs and data on the effects of such new drugs on human longevity, Lichtenberg generates novel results on the returns from R&D. In so doing, he raises several questions about the existing methods that are used to measure R&D capital and its effects on business performance.

Barbara M. Fraumeni and Sumiye Okubo in their paper "R&D in the National Income and Product Accounts: A First Look at Its Effect on GDP" look at related issues, but from the top down instead of the bottom up. Fraumeni and Okubo discuss the data collected on R&D expenditures in the U.S. economy. They discuss how an R&D capital stock could be constructed for the U.S. NIPAs, and then they examine the relationship between their measures and existing GDP. This paper is important in its own right, but it is also important because it fulfills one objective of the conference—which was to spur consideration of how measuring intangible capital may change the existing statistics. Fraumeni and Okubo offer direct guidance on this subject.

The next set of papers explores measurement issues for what might be called "high-tech" capital. High-tech capital is inherently difficult to measure because accurate measurements of quality change and of depreciation are particularly difficult. Tremendous progress has been made in the measurement of certain high-tech products such as computers and semiconductors. The focus of this set of papers is on the areas of high-tech capital in which progress has been slower—in particular, communications equipment and software.

In "Communications Equipment: What Has Happened to Prices?" Mark Doms uses unique, newly developed data to study price and quality change for communications equipment. He finds a high pace of quality change in communications equipment, particularly for the components used to create and provide business computer networks, such as high-end routers, switching equipment, and fiber optic equipment. The estimated pace of quality change is not as dramatic as that for computers and semiconductors but is still quite rapid and faster than official statistics suggest. Doms also points the way for improving the measurement of real output,
prices, and productivity growth in this important industry by offering new ways of classifying its highly diverse components.

One relatively recent change is that software is now treated as an investment good in the NIPAs. This change is important conceptually and has had significant implications for the national accounts. Treated as an investment good, software now contributes to the aggregate capital stock, and the difficult issues of measuring quality change and depreciation for capital goods must now be addressed for software. In "Information-Processing Equipment and Software in the National Accounts," Bruce T. Grimm, Brent R. Moulton, and David B. Wasshausen carefully document the measurement of high-tech assets in the national accounts, including the current treatment of software. The paper goes beyond summarizing the new treatment of software, however. The authors discuss the limitations and difficulties associated with the new methodology, outline areas in which the BEA hopes to make future progress, and discuss recent changes in the methods the BEA uses to measure information-processing equipment.

The remaining papers cover areas that bridge more than one of the aforementioned topics. "Growth of U.S. Industries and Investments in Information Technology and Higher Education" by Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh explores the connections among productivity growth, human capital investment, and information-technology investment at the industry level. Much has been made of the contribution of information technology to the rapid and accelerated pace of productivity growth in the late 1990s. This paper uses industry-level data to explore the contribution of information technology at a more disaggregated level. Also, much has been made in the literature about the skill-biased nature of the information-technology revolution. The rise in the demand for skilled workers in the past few decades and the associated changes in wage inequality have been the subjects of much research and debate. Using the growth accounting methodology that Jorgenson helped pioneer, the authors explore how all of these factors interact at the industry level. Their findings confirm the important contributions of both information technology and investment in human capital (via higher education) to productivity growth. Moreover, the paper raises several issues about measurement. In particular, the lack of comprehensive and integrated industry accounts makes the empirical exercises in this paper challenging, and the authors provide a useful perspective on the ways the official statistics could be improved along these lines.

Finally, W. Erwin Diewert in "Issues in the Measurement of Capital Services, Depreciation, Asset Price Changes, and Interest Rates" provides a unified framework for the measurement of capital that addresses many of the issues and topics covered in the volume. Diewert reviews the standard methods for measuring capital, examines a range of alternatives for the
treatment of depreciation, and explores the role of quality change and how intangible asset stocks could be measured. He uses this unified framework to discuss the pros and cons of the alternatives statistical agencies might face from both a conceptual and a practical point of view. In the context of the other papers in the volume, Diewert provides direct guidance on the various reasons for the difficulty in measuring intangible assets and high-tech capital.

Conclusion

As in past NBER and CRIW conferences, this one stimulated a rich discussion by experts in the areas covered by the volume. For many of the published papers, the discussants' comments are also included. These comments provide useful perspectives on the papers themselves and on the relevant broader issues. In addition, the conference included two panel discussions that provided important perspectives on the issues of the conference. The first panel assessed private-sector approaches to accounting for intangible assets and included Martin Baily, Ed Bersoff, Janamitra Devan, and Baruch Lev. The second roundtable discussed "next steps" for the measurement community and included Brian Kahin, Frederick Knickerbocker, Randall Kroszner, Steven Landefeld, Marilyn Manser, and Larry Slifman.

The volume includes written remarks by noted contributors to the related literature who also participated actively in the conference. After welcoming remarks by Martin Feldstein and introductory remarks by Alan Greenspan, Robert Hall discussed his provocative work on the use of stock market valuations to measure intangible capital. His remarks, published in this volume, offer an interesting perspective, especially in light of the sharp declines in stock prices in recent years. Besides presenting his paper on organizational capital, Baruch Lev participated actively in the panels and discussion, relating the perspectives of the accounting profession on the measurement of intangible capital. As discussed earlier, incorporating the perspective of the accounting profession is important, and his remarks are published in the volume. Finally, the volume includes written remarks by Erik Brynjolfsson and Lorin Hitt. They have been active contributors to the literature on the measurement and contribution of intangibles to productivity, firm performance, and growth. One of the special features of this work is the initiation, supervision, and analysis of private surveys on these topics, yielding a unique perspective on these issues.

Not surprisingly, no overall consensus emerged from the conference. Because capital measurement issues are difficult and the debate is ongoing, this lack of consensus is consistent with the conference tradition. The statistical, research, and policy communities will be grappling with the difficult issues in measuring capital for some time to come—and the problems
associated with defining and measuring intangible capital and measuring high-tech capital will be at or near the top of the list. We believe that the papers in this volume will help define the issues, stimulate debate on these topics, and generate advances in both theory and practice. Indeed, an area of agreement at the conference was that the statistical agencies could use satellite accounts as a repository for the new measures being developed for some components of intangibles (for example, R&D capital and human capital). We hope that the statistical and research communities will use the rich insights presented here in this way and in other ways for years to come. The conference organizers and attendees thank those who made this conference a success and the NBER and CRIW volume possible: the National Science Foundation and the NBER and CRIW for financial support; the Federal Reserve Board for hosting the conference; and the NBER, especially Helena Fitz-Patrick, for assistance in compiling this volume.
References

Brynjolfsson, Erik, and Lorin Hitt. 2000. Beyond computation: Information technology, organizational transformation, and business performance. Journal of Economic Perspectives 14 (4): 23–48.
Griliches, Zvi. 1994. Productivity, R&D, and the data constraint. American Economic Review 84 (1): 1–23.
Hall, Robert. 2000. E-capital: The link between the stock market and the labor market in the 1990s. Brookings Papers on Economic Activity, Issue no. 2:73–118.
———. 2001a. The stock market and capital accumulation. American Economic Review 91 (5): 1185–1202.
———. 2001b. Struggling to understand the stock market. American Economic Review Papers and Proceedings 91 (2): 1–11.
Lev, Baruch. 2001. Intangibles: Management, measurement, and reporting. Washington, DC: Brookings Institution Press.
Nakamura, Leonard. 2001. What is the U.S. gross investment in intangibles? (At least) one trillion dollars a year! Federal Reserve Bank of Philadelphia Research Working Paper no. 01-15.
Shapiro, Carl, and Hal Varian. 1999. Information rules. Cambridge, MA: Harvard Business School Press.
1
Measuring Capital and Technology: An Expanded Framework
Carol Corrado, Charles Hulten, and Daniel Sichel

Carol Corrado is chief of the Industrial Output section in the Division of Research and Statistics at the Federal Reserve Board and a member of the executive committee of the Conference on Research in Income and Wealth. Charles Hulten is professor of economics at the University of Maryland, chairman of the Conference on Research in Income and Wealth, and a research associate of the National Bureau of Economic Research. Daniel Sichel is assistant director in the Division of Research and Statistics at the Federal Reserve Board. We thank Ed Prescott and Barry Bosworth for helpful comments on an earlier draft. The views expressed in this paper are those of the authors and should not be attributed to the Board of Governors of the Federal Reserve System or other members of its staff.

1.1 Introduction and Background

The last two hundred years have witnessed dramatic gains in the standard of living in the United States, driven by the technological innovations introduced by the Industrial Revolution and carried forward by the evolution of technology since then. This evolution is reflected both in the development of new products (electric lighting, automobiles, open-heart surgery) and in the improved efficiency of production processes (the assembly line, just-in-time inventory controls, computer-aided design). Growth economists have undertaken the important task of quantifying these developments and of sorting out the relative importance of the factors that drive sustained increases in real output. This effort has increased in recent years with the debate over the "new economy" and the question of whether living standards will continue to rise at the accelerated pace of the late 1990s.

The "sources of growth" model has been the main empirical tool in tracking and explaining growth trends. This model, developed in the 1950s and 1960s by Solow (1956, 1957, 1960), Kendrick (1961), Denison (1962, 1964), Jorgenson and Griliches (1967, 1972), and others, allocates the growth rate of measured output to the growth rate of labor and capital inputs, each
weighted by their share of output, and a residual factor associated with the efficiency with which output is produced from a given set of inputs ("total factor productivity," or TFP). A large academic literature has evolved using this framework, reviewed recently in Hulten (2001), and TFP has become an official statistic produced regularly by the Bureau of Labor Statistics (BLS). An updated version of the BLS decomposition is shown in the upper half of table 1.1, which indicates that output per hour worked rose about 1.25 percentage points per year faster since 1995 than it did during the period of lackluster growth from 1973 until then. These figures have lent support to the "new economy" view about prospective growth trends.

Table 1.1  Annual change in labor productivity, nonfarm business sector

                                       1948 to 1973  1973 to 1990  1990 to 1995  1995 to 2002
Labor productivity (percent)(a)             2.9           1.4           1.5           2.7
Components (percentage points)
  Capital deepening                         0.8           0.8           0.5           1.3
    IT equipment and software               0.1           0.4           0.5           1.0
    Other equipment and structures          0.7           0.4           0.1           0.3
  Labor composition                         0.2           0.2           0.5           0.3
  Multifactor productivity                  1.9           0.4           0.6           1.1
    R&D(b)                                  0.2           0.2           0.2           0.2

Excluding software
Labor productivity (percent)                              1.3           1.4           2.5
Components (percentage points)
  Capital deepening                                       0.7           0.3           0.9
    IT equipment                                          0.3           0.3           0.6
    Other equipment and structures                        0.4           0.1           0.3
  Labor composition                                       0.2           0.5           0.3
  Multifactor productivity                                0.4           0.6           1.3

Note: Components may not sum to totals because of rounding. IT = information technology.
Sources: Bureau of Labor Statistics, Multifactor Productivity Trends, 2001 (2003; http://www.bls.gov/news.release/prod3.nr0.htm); unpublished update to Oliner and Sichel (2000, 2002).
(a) Output per hour of all persons.
(b) The research and development (R&D) component extends only to 2001.

Although great progress has been made in applying the sources-of-growth framework to the analysis of economic growth, doubts have nevertheless been raised about our ability to understand and measure fully the fundamental sources of growth. For example, despite several decades of rapid technological advance in information technology, one of the originators of the sources-of-growth model, Robert Solow, famously remarked in 1987 that "you see the computer revolution everywhere except in the productivity data" (Solow 1987). Alan Greenspan observed about ten years later that many services industries displayed implausibly negative trends in
measured productivity despite being among the top computer-using industries.1 Also, Nordhaus (1997) concluded from his analysis of the history of lighting that official price and output data "miss the most important technological revolutions in history." The recent debate over the accuracy of the consumer price index (CPI) has raised similar questions: Studies have shown that the failure to capture the full effect of improvements in product quality and the benefits of new goods and services created an upward bias in the CPI of more than 0.5 percentage point (Lebow, Roberts, and Stockton 1994; Advisory Commission to Study the Consumer Price Index 1996; Shapiro and Wilcox 1996) and implied a corresponding downward bias in the rate of increase in the consumption component of total real output.

1. Chairman Greenspan's concerns about the measured productivity trends in services industries were first expressed in remarks at a Federal Open Market Committee meeting in late 1996 in regard to a staff analysis of disaggregated productivity trends (Corrado and Slifman 1999). A BLS study reached a similar conclusion (Gullickson and Harper 1999). The observation that many of the services industries that had negative productivity trends were among the top computer-using industries owes, at least in part, to Triplett (1999); see also Stiroh (1998) and Bosworth and Triplett (2003).

Some of the foregoing concerns can be traced to problems inherent in the data used to study economic growth, but many key data concerns have been addressed in relatively recent work at the major statistical agencies. These include, among others, the renewed emphasis on quality change in the measurement of prices at the BLS, the capitalization of software expenditures by the Bureau of Economic Analysis (BEA), and the expanded coverage of service industries by the Census Bureau. However, no parallel theoretical advance has emerged, within or beyond the sources-of-growth framework, to guide further empirical advances and tell us what we need to know to understand economic growth in the "new economy."

The first goal of this paper is to expand the conceptual framework of the sources-of-growth model by linking it to a variant of the standard model of intertemporal choice developed in Hulten (1979). In the expanded framework, the determination of what expenditures are current consumption and what are capital investment is governed by consumer utility maximization, and any outlay that is intended to increase future rather than current consumption is treated as a capital investment. When this deferred-consumption rule is applied to one of the most important "new economy" questions—whether business intangible outlays and knowledge input should be expensed or capitalized in national accounting systems—an unambiguous answer is obtained: there is no basis from the consumers' point of view for treating investments in intangible capital differently from investments in plant and equipment, or tangible capital.2

2. The focus of this paper is on the macroeconomic debate over the role of intangible capital in the growth process. There is a parallel debate in the financial accounting literature over the expensing of intangibles (Lev 2001; Blair and Wallman 2000). Although the objectives of growth accounting differ from those of financial accounting, the macroeconomic symmetry principle applies at the firm level as well.

We then turn to the question of how much difference the theoretically appropriate treatment of intangible capital might make to the productivity estimates shown in table 1.1. Competing approaches have emerged in the current literature on the measurement of intangibles. One prominent approach uses the value of securities, primarily stocks, to infer the quantity of intangible capital held by U.S. corporations to help interpret recent changes in productivity (R. Hall 2000, 2001a,b).3 Our work, however, follows another branch of the literature. Rather than appealing to the stock market, we pull together disparate pieces of spending data and related evidence to gauge the plausible magnitude of the additional business investment (and thus gross domestic product [GDP] and output per hour) if expenditures on intangibles, or knowledge capital, are treated symmetrically with investments in traditional fixed capital.

3. The link between productivity and stock market performance has been examined in earlier literature (e.g., Baily 1981; B. Hall and R. Hall 1993), and financial data have long been used to value R&D and patent assets to construct intangible stocks at the firm level (e.g., Griliches 1981; Cockburn and Griliches 1988; B. Hall 1993).

The estimates we develop build on the studies by Nakamura and others who have examined the undervaluation of measured business investment and capital in the late 1990s (Nakamura 1999, 2001, 2003; Brynjolfsson and Yang 1999; Brynjolfsson, Hitt, and Yang 2002; McGrattan and Prescott 2000), as well as earlier work sponsored by the Organization for Economic Cooperation and Development (OECD; OECD Secretariat 1998). Our effort differs from previous work, however, by applying the theoretical classification and accounting principles that follow from the expanded framework we introduce in the next section. Not only does this framework lead us to cast a wider net to identify the possible components of business investment in intangibles, but it also leads us to distinguish spending flows that generate relatively long-lasting revenue streams from those whose returns dissipate too quickly to count the associated asset as fixed capital. The latter consideration has not been an explicit aspect of others' efforts to gauge the plausible undervaluation of business investment. We also are careful to separate assets that are already included in the national accounts from those that are not.

All told, our framework for the economic measurement of capital suggests that, if business intangibles are fully recognized in national accounting systems, the move will significantly change measures of economic activity. We estimate that business spending on long-lasting knowledge capital—including intangibles broadly—grew relative to other major components of aggregate demand during the 1990s. As a result, our estimates show that, by the end of the decade, business fixed investment in intangibles was at least as large as business investment in traditional, tangible
capital. When the unrecognized portion of this spending is viewed in relation to existing measures, our framework portrays the U.S. economy as having had higher gross private saving and, under plausible assumptions, fractionally higher average annual rates of change in real output and real output per hour, particularly from 1995 to 2002.

1.2 Theory

1.2.1 The Production Function Approach to Growth Accounting

Contemporary growth accounting is organized around the concept of the aggregate production function. Aggregate real output is assumed to be related to inputs of labor and capital via an aggregate production function, with provision for changes in the productivity of the inputs. When efficiency change has the Hicks-neutral form, the production function can be expressed as

(1)  Q_t = A_t F(K_t, L_t),

where Q_t denotes real output, K_t and L_t are capital and labor, and A_t is an index of the level of TFP. In econometric studies of growth, the production function is given a specific parametric form, and the parameters of F(·) are then estimated using a variety of techniques. In the index-number (nonparametric) approach of Solow (1957) and Jorgenson and Griliches (1967), the growth rate of output is equal to the share-weighted growth rates of labor and capital:

(2)  g_Q = s_K g_K + s_L g_L + g_A.

(The g terms are growth rates, and the s terms are factor shares.) Under constant returns to scale and marginal-cost pricing, Solow showed that the factor shares are equivalent to output elasticities, and the term g_A was associated with a shift in the production function in equation (1) (illustrated as the move from a to c in figure 1.1), while s_K g_K + s_L g_L was a movement along the function (c to b in the figure). Each item in equation (2) can, in principle, be estimated from national accounting data, except for the growth rate of TFP, g_A, which must be inferred as a residual—Abramovitz's famous "measure of our ignorance."

The sources-of-growth equation is the basis for the estimates in table 1.1, which are expressed in labor-productivity terms: g_Q − g_L = s_K(g_K − g_L) + g_A.4 The estimates in this table highlight the slowdown in productivity growth in the early 1970s and the pickup in the mid-1990s. The table also reveals the important role played by TFP in these swings in productivity growth.

4. This variant follows from equation (2) under constant returns to scale, in which case s_K + s_L = 1 and g_L is subtracted from both sides of the equation.
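As a purely arithmetical illustration (an addition to the text, using only figures already reported in table 1.1), the 1995–2002 column of the upper panel adds up, apart from rounding, to the labor-productivity growth it is meant to explain: the capital-deepening term corresponds to s_K(g_K − g_L), the labor composition term reflects the adjustment of labor input for skill mix, and multifactor productivity corresponds to g_A:

\[
\underbrace{1.3}_{\text{capital deepening}} \;+\; \underbrace{0.3}_{\text{labor composition}} \;+\; \underbrace{1.1}_{\text{multifactor productivity}} \;\approx\; \underbrace{2.7}_{g_Q - g_L} \quad \text{percentage points per year.}
\]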
The acceleration in labor productivity since 1995 has generated the "new economy" view that the U.S. economy has entered an era of higher productivity growth. As can be seen in the table, the 1.25 percentage point improvement in labor productivity growth after 1995 was driven both by a pickup in TFP and by the increased capital deepening of information technology (IT) equipment and software.5 The potential importance of investment in intangible assets is borne out by the lower panel of table 1.1, in which the sources of growth estimates exclude the effect of capitalized software (i.e., they exclude investment in software as a component of both output and capital input). Here the pickup in labor productivity growth is seen to be only about 1 percentage point, about one-quarter percentage point less than when software is included.

5. Table 1.1 shows data on actual productivity growth through 2002. Because the table shows actual data, the averages shown over selected periods reflect changes in factor utilization as well as the underlying trend pace of growth.

These results for software point to the potential importance of capitalizing other intangibles, like R&D. In the treatment shown in the upper panel of table 1.1 (the part taken from the BLS release), R&D is added as a "memo" item explaining a portion of the residual; it does not play an explicit role as a capital input. Unfortunately, the traditional sources-of-growth framework treats capital and labor inputs as determined outside of the framework and therefore does not provide guidance as to whether to treat R&D as a memo item or as an investment good and thus as part of output. This problem is addressed in the following section by embedding the sources-of-growth framework in a more complete dynamic model in which investment decisions are made explicit. Our extended framework shows that no basis exists for treating intangible capital differently from traditional forms of fixed capital. Indeed, the asymmetrical treatment of the two types of capital could have the effect of suppressing some of the most dynamic factors driving economic growth.6

6. Our analytical framework also highlights the shortcomings of the prevailing view in the accounting literature. Despite some dissenting views (e.g., Lev 2001), this literature holds that intangible investments should be expensed and not capitalized.

1.2.2 Capital in a Complete Model of Economic Growth

The sources-of-growth model, derived from the production function as illustrated in figure 1.1, evolved as a period-by-period analysis of the factors determining output along the growth path of an economy. This model treats capital as predetermined and cannot fully describe the growth process because saving and investment are choice variables in a complete model of growth. Not only is this choice dimension important because it determines the quantity of capital available at each point in time, but it also determines what should be counted as capital. The answer to the question
[Fig. 1.1  The production function]
of whether intangibles should be treated as capital is therefore a matter of embedding the production function–based sources-of-growth analysis in a larger model of economic growth. Several options are available in this regard: the neoclassical growth model, the endogenous growth model, and optimal-growth theory. The latter is the most suitable because it deals directly with consumer saving behavior. We will therefore work with the variant of this model used in Hulten (1979) to endogenize capital in an expanded growth-accounting framework. Following standard intertemporal capital theory, we assume that optimal consumption in each period is determined by the maximization of an intertemporal utility function U(C1, . . . , CT ) subject to (a) the constraints of the technology represented by the production function in equation (1); (b) the capital accumulation equation expressed in its “perpetual inventory” form as (3)
K_t = I_t + (1 − δ)K_{t−1},
where δ is the rate of depreciation; (c) the constraint that the production of consumption and investment goods cannot exceed total output in any period (4)
p^Q_t Q_t = p^C_t C_t + p^I_t I_t = p^L_t L_t + p^K_t K_t
(this expression also serves as the annual product and income account of the “economy” portrayed by this model, which for simplicity is assumed closed); (d) the exogenously given initial and terminal quantities of capital, K 0 and KT ; and (e) the initial level and paths of labor input and TFP (and thus gL and gA ), which are all given. The resulting solution determines the path of consumption over time {C t∗}, as well as the paths of investment and capital stock. In view of equation (4), the optimal path {C t∗} determines
how output is divided between the production of consumption and investment goods and therefore is relevant to how intangibles ought to be treated.7 For purposes of exposition, we can restate the constraints on the optimization in a more compact form.8 The production function C_t + I_t = A_t F(K_t, L_t), the initial and terminal stocks of capital K_0 and K_T, the accumulation condition K_t = I_t + (1 − δ)K_{t−1}, and the paths of A_t and L_t, can be expressed in equivalent form as (5)
Φ({C_1, . . . , C_T}; {L_1, . . . , L_T}; {A_1, . . . , A_T}; K_0, K_T) = 0.
This is the intertemporal production possibility frontier, which indicates all combinations of the consumption vector {C_1, . . . , C_T}, including the optimal vector, that are possible given the vector of labor input, technology levels, and the initial and terminal stocks of capital. This form of the constraint reveals the endogenous role of capital in the growth process. Capital largely disappears when the constraints are expressed in this form, and it disappears altogether if the initial and terminal stocks of capital happen to be zero.

The solution to the intertemporal optimization problem is shown graphically in figure 1.2 for the case of two time periods. The feasibility constraint is represented by the curve AB, and the intertemporal utility function U(C_1, . . . , C_T) is represented by the curves UU and U′U′. The optimal consumption plan is represented by the point a, and this point defines the maximum wealth of the economy. The line WW indicates the level of this wealth and has the form

(6)  W_{0,T} = Σ_{t=1}^{T} p^C_t C*_t / (1 + i)^t.

7. The Solow-Swan neoclassical growth model (Solow 1956; Swan 1956) provides an alternative description of how output is split between consumption and investment. In that model, investment is assumed to be a fixed proportion of output (I_t = s_I Q_t). A different growth path emerges when output is split between consumption and investment according to this rule, but the general conclusion about intangible capital obtained in this section still holds.
8. We also ignore the complication of chain weighting to keep the exposition simple.
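The following short sketch is an illustrative addition, not part of the chapter: it simulates the accounting relationships in equations (1), (3), (4), and (6) for an assumed Cobb-Douglas technology and made-up paths of investment, labor, and TFP, and discounts the resulting feasible (not necessarily optimal) consumption path into a wealth figure. All functional forms and parameter values are assumptions chosen only for illustration.

```python
# Illustrative only: simulate equations (1), (3), (4), and (6) for assumed paths.
T = 5
delta = 0.10                              # depreciation rate (assumed)
i = 0.05                                  # constant nominal discount rate, as in equation (6)
A = [1.00, 1.02, 1.04, 1.06, 1.08]        # TFP path A_t (assumed)
L = [100.0] * T                           # labor path L_t (assumed constant)
I = [20.0, 21.0, 22.0, 23.0, 24.0]        # investment path I_t (assumed)
p_C = [1.0] * T                           # consumption price p^C_t (assumed constant)

def F(K, L, alpha=0.3):
    """Cobb-Douglas production function, an assumed form for F in equation (1)."""
    return K ** alpha * L ** (1 - alpha)

K_prev = 200.0                            # initial capital stock K_0 (assumed)
W = 0.0                                   # wealth, accumulated as in equation (6)
for t in range(T):
    K = I[t] + (1 - delta) * K_prev       # perpetual inventory rule, equation (3)
    Q = A[t] * F(K, L[t])                 # output from the production function, equation (1)
    C = Q - I[t]                          # output split between consumption and investment, equation (4)
    W += p_C[t] * C / (1 + i) ** (t + 1)  # discounted consumption, equation (6)
    K_prev = K

print(f"Discounted value of the consumption path, W_0,T = {W:.1f}")
```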
Fig. 1.2   The intertemporal framework
outcome had technology not increased from AB′ to AB, with labor held constant? The production possibility frontier in the case of zero technical change is shown as the curve AB′, and the optimal solution as b. The effect of capital formation on the optimal consumption plan (in the absence of technical change) is represented by the notional jump from A to b, and the effect of technical change (including the effect of the induced capital accumulation) as the jump from b to a. The latter is the "wealth effect" of technical change much discussed in recent years, but note that it arises only from unexpected increases in the level of technology. Expected increases in technology are already embedded in the long-run consumption plan of the optimizer (that is, $C_t$ is invariant to expected technical change).

1.2.3 Implications of the Intertemporal Framework for Defining and Measuring Capital

The simple intertemporal framework of figure 1.2 has an important implication for the treatment of intangible capital in a set of economic accounts. Figure 1.2 makes clear that any use of resources that reduces current consumption in order to increase it in the future (for example, the movement along AB from the point A on the horizontal axis to the optimal point a) qualifies as an investment. Figure 1.2 thus argues for symmetric treatment of all types of capital; that is, in national accounting systems, investments in knowledge capital should be placed on the same footing as investments in plant and equipment. This requirement is of rather broad scope. It includes all investments in human capital (not just outlays by government and not-for-profit institutions on education), research and development expenditures, and indeed any expenditure in which a business
devotes resources to projects designed to increase future rather than current output, whether it is intangible or tangible. Financial accounting practice continues to ignore this principle, and national income and wealth accounting is only just beginning to incorporate it into practice. Of course, many practical difficulties arise in implementing the symmetry principle, and these difficulties are one reason that financial accountants prefer to expense intangibles. Much intangible capital investment occurs within the company, household, or government unit that has the intellectual property right to the capital, and no arm's-length valuation of the investment exists. Moreover, the appropriability of property rights and the spillover of externalities also present problems. However, practical difficulties do not invalidate the underlying theoretical principle of symmetry. From a conceptual standpoint, it does not matter at all whether an asset is self-constructed or not, nor does the presence of externalities or market pricing power matter in the theoretical framework of figure 1.2: the intertemporal utility function is based on the final result of the production process—consumption. The consumption possibility frontier ($\Phi$) incorporates all externality effects, monopolistic market structures, and self-constructed assets. No theoretical basis exists for treating one type of capital differently from another simply because one type is harder to measure accurately. Some of the practical difficulties will become apparent in section 1.3 of this paper, in which we attempt to estimate the magnitude of some of the main types of business intangible investments. However, before turning to this task, we will review some of the main implications of our theoretical analysis for national economic accounting.

1.2.4 Implications of Symmetry for Economic Accounting

The symmetry principle implies that the production function, as formulated in equation (1), is an incomplete representation of the production possibilities of an economy and must be expanded to accommodate the input and output of intangibles. Written in implicit form, the expanded production function can be expressed as

(7)   $F(C_t, I_t, N_t; L_t, K_t, R_t, A_t^*) = 0,$
where investment in intangibles is denoted by $N_t$; the accumulated stock of intangibles (adjusted for depreciation) is denoted by $R_t$; and the asterisk on the efficiency term, $A_t^*$, distinguishes it from the term in equation (1). The intertemporal production possibility frontier also must be reinterpreted in terms of the expanded concept of capital and correctly specified efficiency term. The reformulation of the production possibility set has implications for the construction of national accounts and the accounting for economic growth. The GDP identity in equation (4) treats most intangibles as intermediate inputs in which the value of the spending, $p^N_t N_t$, is just matched by
the value produced each period (ignoring foreign trade). As a result, the two flows cancel, and aggregate GDP is unaffected by the size and relative growth of $p^N_t N_t$. When an intangible asset is reclassified as fixed capital, the value of the spending is treated as an addition to GDP, and $p^N_t N_t$ is added to the left-hand side of the GDP identity, equation (4). An adjustment must also be made to the input side of the GDP identity. When intangible capital is treated as a capital input, its user cost value, $p^R_t R_t$, is added to the right-hand side of equation (4). When both the user cost value and the investment value of intangibles are included in GDP, the national accounting identity becomes

(8)   $p^C_t C_t + p^I_t I_t + p^N_t N_t = p^L_t L_t + p^{K*}_t K_t + p^R_t R_t.$
For simplicity of exposition, we assume that the $p^I_t I_t$ and $p^{K*}_t K_t$ refer exclusively to tangible capital, whereas the U.S. national accounts include some intangibles in $I_t$. The notation $p^{K*}_t$ denotes the user cost of tangible capital input when intangibles are recognized as in equation (8). It differs from the corresponding user cost in equation (4), which may include income from intangible capital. The two user costs are connected by the expression

(9)   $p^K_t K_t = p^{K*}_t K_t + p^R_t R_t - p^N_t N_t.$
This expression not only indicates that the conventional measure of capital income has a potential bias, but it also reveals an important feature of the expanded accounts. The symmetry principle indicates that the value of intangible production, $p^N_t N_t$, must be added to the conventional GDP identity in equation (4) to arrive at the correct expression in equation (8); this raises the issue of where this additional value of output should appear on the right-hand (income) side of equation (8). The treatment of intangibles on the income side of the accounts is subtle because the original estimate of capital income in equation (4), $p^K_t K_t$, is measured by the sum of all property-type income (interest, dividends, retained earnings, taxes, and depreciation). This list accounts for the nonlabor payments accruing to both tangible and intangible capital—that is, to $p^{K*}_t K_t + p^R_t R_t$. Where, then, would intangible investment, $p^N_t N_t$, appear on the income side of the accounts? The answer is that intangible investments are reflected as retained earnings that are uncounted in the conventional framework. Specifically, because intangibles are expensed in the conventional framework, they are subtracted from revenue to get earnings. Because intangibles would not be subtracted from revenue in the expanded framework, retained earnings are higher than in the conventional framework.9 Thus, the symmetry principle is not just about uncounted output, but also about uncounted income accruing to capital.
9. Moulton, Parker, and Seskin (1999) describe this accounting for the recognition of software in the NIPAs. See also Fraumeni and Okubo (chap. 8 in this volume) and B. Hall (chap. 8 comment in this volume).
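To see the mechanics of equations (4), (8), and (9) in one place, the sketch below runs the accounting for a hypothetical economy. The dollar figures are invented for illustration and are not estimates from this chapter; the share calculations anticipate the factor of proportionality discussed after equation (10) below.

```python
# Toy illustration of equations (4), (8), and (9): how capitalizing intangible
# spending changes measured GDP, capital income, and factor shares.
# All numbers are hypothetical.

pC_C = 700.0   # consumption
pI_I = 200.0   # tangible investment
pN_N = 100.0   # intangible investment, expensed in the conventional accounts
pL_L = 600.0   # labor compensation

# Conventional accounts, equation (4): GDP = C + I, capital income is the residual.
gdp_old = pC_C + pI_I
pK_K = gdp_old - pL_L                 # conventional property-type income

# Expanded accounts, equation (8): intangible investment is output, and the income
# side adds the user cost of intangible capital. By equation (9), the combined
# capital income (tangible user cost plus intangible user cost) equals pK_K + pN_N.
gdp_new = pC_C + pI_I + pN_N
capital_income_new = pK_K + pN_N

# Labor and consumption shares, and the proportionality factor of equation (10).
s_L_old = pL_L / gdp_old
s_L_new = pL_L / gdp_new
factor = (pC_C + pI_I) / (pC_C + pI_I + pN_N)   # equation (10), less than one
assert abs(s_L_new - factor * s_L_old) < 1e-12

saving_rate_old = 1 - pC_C / gdp_old
saving_rate_new = 1 - pC_C / gdp_new            # higher: intangibles count as saving

print(f"GDP: {gdp_old:.0f} -> {gdp_new:.0f}")
print(f"Labor share: {s_L_old:.3f} -> {s_L_new:.3f}")
print(f"Saving/investment share: {saving_rate_old:.3f} -> {saving_rate_new:.3f}")
```

The assertion simply confirms that the adjusted labor share equals the conventional share scaled by the ratio in equation (10).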
The expanded framework for growth accounting presents a somewhat different view of the economy than the approach that ignores intangibles. Comparing the GDP identities in equations (4) and (8) reveals that the latter is greater by $p^N_t N_t$. The rate of saving and investment and the relative shares of capital and labor in GDP are also affected. Labor's share in the "old" view is $s_L = (p^L L)/(p^L L + p^K K)$, which becomes $s_L^* = (p^L L)/(p^L L + p^K K + p^N N)$ when intangibles are recognized as assets. (For ease of exposition, time subscripts have been dropped, and the asterisk is used to denote the "true" income share.) The two shares are related by a factor of proportionality

(10)   $\dfrac{p^C C + p^I I}{p^C C + p^I I + p^N N},$
which is less than one. The basic result is that labor's "new" share is smaller, that is, $s_L^* < s_L$. Since both the old and new shares must sum to one, capital's income share must be larger when intangibles are recognized as being capital. Put differently, the "new" view includes the return to these forms of investment. A similar analysis applies to the rates of saving and consumption. Consumption's true share is $s_C^* = (p^C C)/(p^C C + p^I I + p^N N)$, and the mismeasured share is $s_C = (p^C C)/(p^C C + p^I I)$; they are related by the same proportionality factor as above, that is, $s_C^* < s_C$. Consumption's share is smaller when intangibles are treated as capital, implying that the rate of saving and investment is correspondingly higher. This result is relevant in light of concerns about the low rate of saving in the U.S. economy: existing measures of saving exclude much of the investment in knowledge capital that defines the modern economy.

1.3 The Scale of Business Investment

In this section, we first identify and group the items commonly thought to represent private business spending on intangibles. We next review the available data sources and develop estimates of outlays on intangibles for three periods chosen to illustrate the growth in intangibles in the 1990s. The intertemporal model is then used to guide the determination of which (or what portion) of the items we have identified as business intangibles should be categorized as business fixed investment in national accounting systems.

1.3.1 Identifying and Estimating Business Spending on Intangibles

As illustrated in table 1.2, we group the various items that constitute the knowledge capital of the firm into three broad categories: computerized information, innovative property, and economic competencies. The table indicates, in general terms, what type of knowledge capital is included in
Table 1.2   Business intangibles, by broad group

Name of group | Type of knowledge capital | Current status in the NIPAs
Computerized information | Knowledge embedded in computer programs and computerized databases | Major component, computer software, is capitalized
Innovative property | Knowledge acquired through scientific R&D and nonscientific inventive and creative activities | Most spending for new product discovery and development is expensed^a
Economic competencies | Knowledge embedded in firm-specific human and structural resources, including brand names | No items recognized as assets of the firm

a. Two small components—oil and gas exploration, and architectural and engineering services embedded in structures and equipment purchases—are included in the NIPA business fixed investment.
each broad group and how each type is currently treated in the NIPAs.10 The major component of computerized information, computer software, already is included as business fixed investment. The other components, which include the knowledge acquired by businesses in their development of new or improved products, processes, or economic competencies, are not generally recognized as assets of the firm in the U.S. NIPAs or in other national accounting systems. Table 1.3 presents estimates of the size of outlays by businesses on these broad categories of intangibles. As indicated by the numbered rows of the table, these estimates have been built from nine types of intangible assets that have been identified and grouped according to our three basic categories. The table summarizes the availability of data that can be used to measure business spending on each item and then presents our spending estimates for the late 1990s, that is, from 1998 to 2000; spending rates for periods five and ten years earlier also are shown to help illustrate trends by detailed asset type during the decade.11 In the discussion that follows, we review the data sources for each item and assess how the available information can be used to broaden measures of business investment to more fully encompass each of the categories of knowledge capital shown in table 1.2. Computerized Information Computerized information reflects knowledge embedded in computer programs and computerized databases. When computer software was 10. Our classification is similar, but not identical, to groupings developed in studies sponsored by the OECD (see OECD Secretariat 1998 and Khan 2001) and used by Lev 2001. 11. The spending estimates summarized in the table have been developed as consistent, annual time series from 1988 to 2002. In much of the analysis that follows, we focus on the figures for the late 1990s.
Table 1.3   Data availability and estimated size of business spending on intangibles, by type of asset

Estimated size^a
Type of asset or spending                                1988–90          1993–95          1998–2000
Computerized information                                      41               70              155
 1. Computer software                                         40               69              151
 2. Computerized databases                                                                        3
Innovative property                                 205 (165–245)    260 (205–315)    425 (325–525)
 3. Science and engineering R&D                              103              123              184
 4. Mineral exploration
    a. Mineral exploration                                     9               10               16
    b. Other geophysical and geological exploration            2                2                2
 5. Copyright and license costs
    a. Motion pictures                                         9               14               25
    b. Radio and TV, sound recording, book publishing   19 (9–28)       27 (14–40)       50 (25–75)
 6. Other product development, design, and research
    a. New products in financial services              23 (12–35)       38 (19–57)      74 (37–111)
    b. New architectural and engineering designs        36 (18–54)       42 (21–63)      68 (34–102)
    c. R&D in social sciences and humanities               4 (2–6)          5 (2–7)         7 (4–11)
Economic competencies                               325 (270–390)    425 (350–515)    640 (525–785)
 7. Brand equity
    a. Advertising                                           124              151              217
    b. Market research                                  10 (5–15)        12 (6–19)        19 (9–28)
 8. Firm-specific human capital
    a. Direct firm expenses                                   13               17               22
    b. Wage and salary costs of employee time                 55               70               94
 9. Organizational structure
    a. Purchased (management consulting)                      27               42               81
    b. Own account (executive time)                    93 (45–155)     132 (65–220)    210 (105–345)
Grand total                                         570 (475–675)    755 (625–900)  1,220 (1,005–1,465)
Percent of GDP                                        10.4 (9–12)      10.7 (9–13)      13.1 (11–16)

Comments on data availability and data sources:
1. Computer software. Covers expenses of software developed for a firm's own use; based on NIPA data that include three components: own use, purchased, and custom software.
2. Computerized databases. Own use likely is captured in NIPA software measures; data from the Services Annual Survey (SAS) suggest that the purchased component is small.^b
3. Science and engineering R&D (costs of new products and new production processes, usually leading to a patent or license). Mainly R&D in manufacturing, software publishing, and telecom industries. The census collects data on behalf of the National Science Foundation (NSF). Industrial R&D data are available from the early 1950s and cover work in the physical sciences, the biological sciences, and engineering and computer science (excl. geophysical, geological, artificial intelligence, and expert systems research).
4. Mineral exploration (spending for the acquisition of new reserves). Mainly R&D in mining industries. a. Mineral exploration, Census of Mineral Industries and NIPAs. b. Other geophysical and geological exploration R&D in mining industries, estimated from census data.^c
5. Copyright and license costs (spending for the development of entertainment and artistic originals, usually leading to a copyright or license). Mainly R&D in information-sector industries (excl. software publishing). No broad statistical information, proxied by: a. Development costs in the motion picture industry.^d b. Development costs in the radio and television, sound recording, and book publishing industries are crudely estimated to be double the new product development costs for motion pictures. (No estimate for the arts is included.)
6. Other product development, design, and research expenses (not necessarily leading to a patent or copyright). Mainly R&D in finance and other services industries. No broad statistical information, proxied by: a. New product development costs in the financial services industries, crudely estimated as 20 percent of intermediate purchases.^e b. New architectural and engineering designs, estimated as half of industry purchased services (revenues of the industry as reported in SAS). c. R&D in social sciences and humanities, estimated as twice industry purchased services (revenues as reported in SAS).
7. Brand equity (advertising expenditures and market research for the development of brands and trademarks). a. Purchases of advertising services; advertising expenditures.^f b. Outlays on market research, estimated as twice industry purchased services (revenues of the market and consumer research industry as reported in SAS).
8. Firm-specific human capital (costs of developing workforce skills, i.e., on-the-job training and tuition payments for job-related education). Broad surveys of employer-provided training were conducted by the Bureau of Labor Statistics (BLS) in 1994 and 1995.^g a. Direct firm expenses (in-house trainers, outside trainers, tuition reimbursement, and outside training funds). b. Wage and salary costs of employee time in formal and informal training.
9. Organizational structure (costs of organizational change and development; company formation expenses). No broad statistical information, and no clear consensus on scope. a. Purchased "organizational" or "structural" capital, estimated using SAS data on the revenues of the management consulting industry. b. Own-account component, estimated as value of executive time using BLS data on employment and wages in executive occupations.

Note: Numbers in parentheses are ranges for the point estimates shown.
a. Billions of dollars, average annual rate. Totals and ranges for major categories are rounded to the nearest $5 billion dollars.
b. Refers to the subscription-type revenue (i.e., total revenue, excluding advertising sales, contract printing, and other) of the database and directory publishing industry reported in the 2000 SAS.
c. Output of the geophysical surveying and mapping services industry; Census of Mineral Industries quinquennial data were interpolated to obtain estimates for other years.
d. Data on the cost per release for Motion Picture Association of America (MPAA) members is applied to the number of releases by independent producers. Derived from statistics reported in the MPAA U.S. Economic Reviews and available at http://www.mpaa.org.
e. Intermediate purchases for finance industries (SICs 60–62, 67) from the Bureau of Economic Analysis's GDP-by-industry data set.
f. The advertising data are from Bob Coen's Insider's Report, issued by Universal-McCann, available at http://www.mccann.com/insight/bobcoen.html.
g. Estimates for other years were derived from (1) the industry detail on per employee costs reported in Bureau of Labor Statistics surveys, and (2) trends in aggregate educational costs, industry employment, and industry employment costs.
recognized in the NIPAs in 1999, the move captured the estimated costs of software created by firms for their own use as well as purchases of prepackaged and custom software. The own-account estimates were developed from detailed occupational data on employment and wages in private industry, in conjunction with an estimate (50 percent) of the average time spent by individuals in the relevant occupations on "software development" (Parker and Grimm 1999). This method of estimating investment, though imprecise, is consistent with the framework of figure 1.2: Some uses of employee time (development) are investments, other uses are inputs to current production. Spending in this category thus reflects the current NIPA computer software estimates (which averaged more than $150 billion during 1998–2000) plus a small figure (about $3 billion) for computerized databases, an item not capitalized in the NIPAs. Spending on computerized databases is estimated from the Census Bureau's Services Annual Survey (SAS). The recognition in the NIPAs of computer software as investment generally has been met with acceptance and praise. During the work leading up to the introduction of software in the accounts in 1999 and in subsequent work by the BEA and others, important lessons have been learned about how to handle the possible double-counting from bundling of assets as well as issues related to own-account production.12 Because intangibles are often bundled with fixed assets (e.g., Brynjolfsson, Hitt, and Yang 2002) or constructed on own-account, the lessons learned from software should be quite valuable as efforts to measure other intangibles move forward.

Innovative Property

Although the innovative property category of intangibles includes the familiar R&D spending data as one of its components, it is broader. It reflects not only the scientific knowledge embedded in patents, licenses, and general know-how (not patented) but also the innovative and artistic content in commercial copyrights, licenses, and designs. The category thus encompasses what we term "nonscientific R&D" in addition to the familiar "scientific R&D" component. In contrast to scientific R&D, the magnitude and trajectory of the spending flow on nonscientific R&D are not very well measured. Nonetheless, we estimate that in the 1990s, nonscientific R&D spending was at least as large as scientific R&D spending. All told, our figures suggest that private businesses spent about $425 billion per year during 1998–2000 on investments in innovative property. The industrial R&D data that have been the subject of most of the research in the United States have been collected since the early 1950s by the Census Bureau for the National Science Foundation (NSF). These data are
12. Software is an important tool in R&D, and the conceptual overlap between figures for own-account software and the data on R&D expenditures must be confronted; this point is emphasized in the OECD's work on intangibles.
defined to include expenditures “on the design and development of new products and processes and on the enhancement of existing products and processes.” The included expenditures are restricted to activities carried on “by persons trained, either formally or by experience, in the physical sciences, the biological sciences, and engineering and computer science (but excluding geophysical, geological, artificial intelligence, and expert systems research).” As is consistent with this restriction, the NSF’s industrial R&D data mainly capture inventive activity by industries that employ these types of workers (high-tech, pharmaceutical and other manufacturers, software publishers, telecommunications service providers, and the like), who are estimated to have spent nearly $185 billion annually on R&D in the 1998– 2000 period. Adding an estimate of mining R&D (more than $15 billion) yields a point estimate for total spending on scientific R&D (items 3 and 4 on the table) of about $200 billion annually during 1998–2000. Relatively little is known about “nonscientific” R&D spending. Information-sector industries—book publishers, motion picture producers, sound recording producers, and broadcasters—as well as financial and other services industries routinely research, develop, and introduce new products. However, we have no broad survey data on the resources they devote to these activities. In the table, we identify two types of nonscientific R&D spending (table 1.3, lines 5 and 6), noting that new product investment by information-sector industries usually leads to an identifiable asset, such as a copyright or license, whereas the fruits of nonscientific R&D spending elsewhere usually do not. For the motion picture industry, trade association data suggest that the development costs of new movie releases averaged $25 billion per year during 1998–2000 (table 1.3, line 5a). This figure includes the actual costs incurred by the major motion picture producers and the estimated costs for independent producers, with the latter accounting for about half of the total. We have no comparable data for the production costs of new television programs, sound recordings, or books. However, the new film and new TV program development costs of four of the seven major U.S. film and TV producers/distributors who identify such costs on a comparable basis in their financial reports were $15 billion per year for the same period, an amount suggesting that the costs of developing new TV programs are nontrivial.13 As a result, development costs for new products in the broadcasting, sound recording, and book publishing industries (line 5b) are crudely estimated to be twice the new product development costs for motion pictures, a relationship that is roughly the same as that between the revenue of these industries and that of motion pictures as reported in the SAS. A range of plus or minus 50 percent is placed on this point estimate to indicate the high degree of uncertainty about this guess. Our estimates for other new product development, design, and research 13. The four companies are the Warner Bros. unit of Time-Warner, Disney, Fox, and MGM.
costs are also rudimentary guesses, again with a range of plus or minus 50 percent placed on the point estimates. Nakamura (2001) proxies new product development costs in the financial services industries as a proportion (half) of the noninterest expenses of banks and nondepository institutions. We broaden the coverage to include other financial institutions (security and commodity brokers and other financial investments and related activities), and for our proxy we use 20 percent of all intermediate purchases reported in the BEA’s data on gross output and value added by industry; this spending was about $75 billion annually during 1998–2000. Elsewhere in services, we estimate that spending on new architectural and engineering designs was nearly $70 billion during the same period and that R&D in the social sciences and humanities was about $7 billion, twice its purchased component from SAS revenues (to include an own-account portion). Economic Competencies Economic competencies represents the value of brand names and other knowledge embedded in firm-specific human and structural resources; it gathers the expenditures designed to raise productivity and profits (other than the software and R&D expenses classified elsewhere) and labels them “economic competencies.” We include three basic asset types in economic competencies: brand names, firm-specific human capital, and organizational structure. As indicated on lines 7 through 9 of table 1.3, we suggest that spending on these assets can be captured by measuring the costs of brand development; the costs of workforce training and education; and the costs of organizational change and development. Our raw tally of these flows places the spending on economic competencies at about $640 billion per year in the 1998–2000 period. This large spending category, however, is imprecisely estimated, and we place a wide range around our point estimate. Spending on brand development is represented by expenditures on advertising and market research (table 1.3, lines 7a and 7b) and encompasses the costs of launching new products, developing customer lists, and maintaining brand equity. Although advertising and market research are generally aimed at building a firm’s market share at the expense of its competitors, such spending is necessary for developing new brands and maintaining the value of existing brands. Data on advertising expenditures are available from Universal-McCann, and revenues of the industry, market and consumer research services, are available in the SAS. Because we are unable to gauge the size and prevalence of own-account market and consumer research, the SAS purchases are simply doubled to obtain an estimate for this item. Our estimate for total spending on brand equity thus totals about $240 billion annually from 1998 to 2000. The incidence and costs of employer-provided training was measured in special surveys conducted by the BLS in 1994 and 1995, and the results
placed private business spending on workforce training and education in those years at a $106 billion annual rate (table 1.3, lines 8a and 8b). The BLS surveys were designed both to yield unbiased economywide estimates of the costs of employer-provided training and to capture conceptually just what the intertemporal model wants: the total spending figure includes both direct firm expenses (outlays on instructors, tuition reimbursements, and the like) and the wage and salary costs of employee time spent in formal and informal training. The figures for the early and late 1990s shown in the table are extrapolations using (a) the survey’s industry detail on the mid-1990s costs per employee and (b) trends in aggregate educational costs, industry employment, and industry employment costs. Investments in organizational change and development have both ownaccount and purchased components. The own-account component is represented by the value of executive time spent on improving the effectiveness of business organizations—that is, the time spent on developing business models and corporate cultures. The purchased component is represented by management consultant fees. The purchased component is estimated using the SAS annual revenues from the management consulting services industry, which rose substantially in the 1990s, from $27 billion at the start of the decade to more than $80 billion during 1998–2000 (table 1.3, line 9a). The own-account portion is estimated as a proportion of the cost and number of persons employed in executive occupations, which rose very rapidly in the 1990s. Given that executive median pay exceeds the median pay for other employees, the fraction of total private payroll spent on executives and managers is substantial, almost 22 percent in 2000 (Nakamura 2001). Applying the executive and manager payroll share to total private business-sector compensation yields an estimate for managerial and executive costs of nearly $900 billion per year in the 1998–2000 period. If just one-fifth of management time is spent on organizational innovation, then businesses devoted more than $200 billion per year to improving the effectiveness of their organizations during 1998–2000 (table 1.3, line 9b). This figure is highly sensitive, of course, to the admittedly arbitrary choice of one-fifth as the fraction of time managers spend on investing in organizational development and change; as a result, our estimate for this component ranges from $105 billion (based on a one-tenth fraction) to nearly $350 billion (which assumes one-third). Adding in the $80 billion annual expense for management consulting (described above), our point estimate of total spending on organizational change and development is nearly $300 billion per year from 1998 to 2000.14 14. Consulting expenses and the estimated value of executive time conceptually overlap by a small amount (the value of executive time in the management consulting industry). In addition, some portion of management time arguably overlaps with R&D, so that, for some industries, the line between industry-specific process innovation and organizational change more generally may not be easily drawn. But, whatever uncertainty that amount induces in
Summing Up As indicated at the bottom of table 1.3, our best guesses suggest that business spending on intangibles was about $1.2 trillion annually in the late 1990s, more than 13 percent of GDP. Moreover, we estimate that nominal spending on intangibles grew relative to the total economy during the second half of the 1990s; the ratio of our estimates to GDP in the 1998–2000 period was about 2.5 percentage points more than it was at the start of the decade and during 1993–95. The picture does not change materially when software, which is already capitalized in the NIPAs, is excluded: our grand total for 1998–2000 still exceeds $1 trillion (nearly 11 percent of GDP) and still expands at a much faster rate than it did earlier in the decade. Our estimate for business spending on intangibles in the late 1990s shown in table 1.3 is nearly 40 percent larger than the spending-based estimate issued by Nakamura (2001, 2003) and about 20 percent larger than his estimates based on other methods, such as changes in the occupational structure over time.15 Although we use many of the same data sources, our estimates are larger because we introduce some important new sources of information on spending on intangibles. In particular, the estimates in table 1.3 incorporate official data on the revenue of selected services industries (the Census Bureau’s annual SAS data) and on businesses outlays on employer-provided worker training (the BLS’s special surveys). Also, our estimates include an explicit figure, based on data from a trade source, for new product investment in the motion picture industry. Last, as is consistent with the intertemporal model, our estimates regard spending on each asset type as having an own-account, as well as a purchased, component. Although our estimates of own-account spending are fairly crude, their inclusion affects the scale and trajectory of the overall aggregate results. 1.3.2 Classifying Intangible Spending as Fixed Investment The intertemporal model and framework of section 1.2 would not necessarily classify the grand total shown at the bottom of table 1.3 as business investment in long-lasting knowledge capital. To implement the model’s our estimates, it is dwarfed by the use of an arbitrary fraction for the amount of executive time devoted to organizational change and development. 15. The Nakamura (2001, 2003) spending-based estimate adds together (a) the BEA’s estimate of software spending, (b) double the NSF’s estimate of R&D spending (to capture the nonscientific spending we estimate more explicitly), and (c) the Universal-McCann advertising data. For the 1998 to 2000 period, using figures shown in table 1.3, these items sum to $759 billion, or 8.2 percent of GDP. Nakamura also uses three other methods, or broad sources of information, to estimate intangibles: labor inputs (the relative increase in employment in “creative” occupations over time), corporate operating margins (the relative increase in “administrative” costs), and models of the ratio of consumption to GDP, a ratio that has risen since the early 1980s. All told, his estimates place business spending on intangibles at about our lower bound for the grand total shown in table 1.3.
simple rule—investment is deferred consumption—a line must be drawn to separate the expense of current production from outlays that expand future productive capacity. National economic accountants typically make this distinction on the basis of the “durability,” or expected service life, of a purchase. In practice, accountants determine durability by setting a minimum time or “cutoff period” required for an asset to yield services (e.g., one year or three years). If the cutoff period chosen is very short (long), then a larger (smaller) fraction of business expenditures are classified as fixed investments. As a result, the scale of business fixed investment (and thus GDP) in a particular measurement system depends on the durability cutoff period that is chosen. Put differently, although the theoretical model views all expenditures aimed at increasing the range of production possibilities in a future period as fixed investment (recall the discussion of symmetry in section 1.2), the model does not specify the length of time between the current and “a future” period, even though this definition is needed to determine what becomes fixed investment in a particular measurement system.16 Of course, the NIPAs and the international System of National Accounts 1993 (SNA; Commission of the European Communities 1993) currently classify components of intangible spending, either implicitly or explicitly, and the NIPAs are guided by the SNA to some extent. For example, as they do with computer software, the NIPAs treat mineral exploration as fixed investment, and they treat scientific R&D as intermediate consumption, all of which is consistent with the SNA.17 But no single recent statement conveys the NIPA’s “choice” of a durability cutoff for determining what is business fixed investment. The BEA’s practice would appear to approximate a system in which business fixed assets are inputs with a useful service life of at least three years, although a dated study defines business equipment as having an average service life of more than one year.18 16. But note that because what becomes fixed investment depends on the durability cutoff period chosen, what becomes inventory investment also depends on this cutoff period. Any input that is not used up in current period and is intended to be used to increase future consumption is still “capital” in the sense of figure 1.2, even if it is not classified as fixed capital. Such an input would simply be an inventory carried over for future use. An examination of the implications of the intertemporal framework for identifying and valuing intermediates as inventory investment is beyond the scope of this paper. 17. However, the SNA advises national accountants to treat the costs of developing an “artistic or entertainment original” (one of our nonscientific R&D asset types) as investment, a practice not followed in the NIPAs. 18. Moreover, different sectors in the NIPAs apply different accounting periods for capital. According to a more than twenty-year-old study, business and government equipment is defined as durable goods having an average service life of more than one year, whereas, for households, equipment (consumer durables) is defined as durable goods having an average service life at least three years (Young and Musgrave 1980). In practice, the available source data introduce additional twists in BEA practices: for example, the Census of Governments uses five years as a cutoff to determine whether a purchase is equipment. 
But for the business sector, in which the flows are built using data by asset type according to the commodity-flow
In the remainder of this section, we review the available evidence and confront the difficult task of determining which items in table 1.3 are fixed investment. Although we faced many hurdles in assembling the data in table 1.3, we were on a path that others (the OECD, Nakamura, Lev) have followed. At this juncture, however, we are on our own in applying the full logic of the intertemporal model to the development of measures for business fixed investment that include intangibles. Not surprisingly, the challenges we now confront exceed those we faced in the construction of table 1.3. Table 1.4 works with five subaggregates of the detailed items reported in table 1.3 and summarizes our results. The first three columns report the table 1.3 figures for total spending, and the last three columns show our corresponding estimates for capital spending. Because the capital spending estimates were derived by considering whether some, rather than all, of total spending should be classified as investment in long-lasting knowledge capital, the capital spending estimates for some items are lower than the total spending estimates. Also, because we don’t know whether to apply a service life cutoff of one year or three years, ranges are shown when the available evidence suggests that this choice may make a difference. The capital spending columns of table 1.4 were developed in four steps: first, if economic research has clearly demonstrated that a given type of spending is fixed investment in the sense of our model, then we categorize 100 percent of the total spending as capital spending. For example, scientific R&D (line 2a) is unequivocally a long-lived investment; so is employerprovided training (a component of line 3b).19 Second, if economic research suggests that only a portion of the spending on an intangible pays off in a future year (or years), we apply these findings. For example, although the marketing literature finds that the effects of advertising are generally short lived, some advertising—apparently, more than half—has a service life of at least one year; a smaller fraction—perhaps one-third—makes a cutoff of three years (Landes and Rosenfield 1994).20 As a result, the estimates for method (see Grimm, Moulton, and Wasshausen, chap. 10 in this volume), the NIPAs currently do not recognize any assets having a service life of less than three years. (The authors thank Brent Moulton, associate director of National Economic Accounts at the BEA, for the information in this footnote.) 19. The very high rates of return to R&D investments found in microeconomic studies are summarized in National Science Board (2000, p. 7-18); see also Griliches (1984). Studies that document the returns to employer-provided training include, among others, Bartel (1991, 1994) and Black and Lynch (1996). 20. Specifically, of the twenty two-digit Standard Industrial Classification (SIC) industries examined by Landes and Rosenfield, the implied annual geometric rate of decay of advertising was about 55 percent or less for seven industries (furniture; paper; chemicals; fabricated metals; transportation equipment; instruments; and building materials, hardware, and garden supplies) and about 65 percent to 70 percent for seven more (food, rubber and plastics, industrial machinery, electrical machinery, miscellaneous manufacturing, apparel retailers, and business services). 
For the remaining six industries (wholesale distributors of durable goods, general merchandisers, eating and drinking places, miscellaneous retailing, investment offices, and hotels), the effects of advertising dissipated within one year.
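As a rough illustration of how the one-year and three-year capital-spending figures for brand equity can be formed from the total spending data, the sketch below applies the approximate shares described in the notes to table 1.4 (shown below): the "national" portion of advertising, taken here as 60 percent, plus a two-thirds share of market research, with the three-year figure set at half the one-year figure. The exact percentages, the treatment of the market research line, and the rounding rule are assumptions for illustration, not the chapter's precise procedure.

```python
# Sketch of the brand-equity cutoff calculation described in the notes to table 1.4.
# The 0.60 "national" share and the two-thirds market-research share are rough
# stand-ins for the figures cited there.

def brand_equity_capital(advertising, market_research,
                         national_share=0.60, research_share=2 / 3):
    """Return (one_year, three_year) capital-spending estimates, rounded to $5 billion."""
    one_year = national_share * advertising + research_share * market_research
    three_year = 0.5 * one_year
    round5 = lambda x: 5 * round(x / 5)
    return round5(one_year), round5(three_year)

# Total spending on advertising and market research, 1998-2000 (table 1.3, lines 7a-7b).
one_yr, three_yr = brand_equity_capital(advertising=217, market_research=19)
print(one_yr, three_yr)
```

With the 1998–2000 spending figures from table 1.3, this yields estimates in the neighborhood of the 70–140 range reported for brand equity in table 1.4.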
Table 1.4   Business intangibles: Total spending and capital spending

Total spending
Type of asset or spending                                1988–90         1993–95         1998–2000
1. Computerized information (mainly computer software)       41              69              154
2. Innovative property
   a. Scientific R&D                                         114             135              201
   b. Nonscientific R&D                                       91             126              223
3. Economic competencies
   a. Brand equity                                           134             164              235
   b. Firm-specific human and structural resources           188             261              407

Estimated capital spending^a
Type of asset or spending                                1988–90         1993–95         1998–2000
1. Computerized information (mainly computer software)  35–40 (41)      55–70 (69)     125–155 (151)
2. Innovative property
   a. Scientific R&D                                      114 (9)        135 (10)         201 (16)
   b. Nonscientific R&D                                  75–90 (21)     100–125 (25)    180–225 (40)
3. Economic competencies
   a. Brand equity                                        40–75           50–95            70–140
   b. Firm-specific human and structural resources          170             235              365
4. Total (1 + 2a + 2b + 3a + 3b)                       425–490 (70)    570–660 (105)   940–1,085 (205)
   a. Percent of GDP                                      8 to 9          8 to 9          10 to 12
   b. Ratio to tangible capital spending                 .8 to .9        .9 to 1.0       1.0 to 1.2

Comments on evidence as capital spending:
1. Computerized information: Firms capitalize only a fraction of purchased software in financial accounts. Relatively little is known about service life of software assets.
2. Innovative property: Yields relatively long-lasting returns and is fixed investment. A portion of new product development costs in the entertainment industry apparently is short lived.
3. Economic competencies: The effects of some advertising dissipate within one year, but more than half has effects that last more than one year; a smaller fraction, perhaps one-third, makes a three-year cutoff.^b Firm-specific training is fixed investment. Spending for organizational change also is likely long lived, but a portion of management fees probably is not capital spending.

Notes: Components may not sum to totals because of rounding. All results are rounded to the nearest $5 billion. Numbers in parentheses are the amounts already included in the NIPAs.
a. Ranges correspond to estimates using a service life cutoff of one (upper) to three (lower) years.
b. See text note 20. Because the relatively longer-lived ads are those aimed at selling specific products, rather than promoting sales at specific stores, the "national" portion of the Universal-McCann data (nearly 60 percent of the total) plus two-thirds of purchased marketing and consumer research is used as a measure of the spending that makes a one-year cutoff. This figure is reduced by 50 percent to obtain the estimate corresponding to a three-year cutoff, and the result is rounded to the nearest $5 billion.
capital spending on brand equity (line 3a) are noticeably smaller than the corresponding total spending figures. Our third and fourth steps are less precise: when we have a strong suspicion that the lifetime of a type of intangible may not be at least three years, we discount the item by 20 percent (an arbitrary amount) and show a range for our estimate for capital spending for the item. The lower bound of this “lifetime ignorance” range is the discounted figure and the upper bound is the total spending figure. For example, BEA’s assumptions of a three-year service life for all prepackaged software and a five-year service life for custom and own-account software are based on indirect and anecdotal evidence (Parker and Grimm 1999). Because many businesses expense recurring software fees and routine purchases of upgrades,21 and because very little is known about the age-efficiency and retirement patterns of software assets, we apply the discount; the software capital spending estimate (line 1) thus shows a range to capture the uncertainty about software lifetimes. Also, the spending for some new products in the entertainment industry— a new television series or a new copyright film—are investments that generate, on average, relatively long-lasting revenue streams; but other spending on new product development pays off very quickly, while still other costs are “paid for” by advertising (Caves 2000). All told, to indicate our ignorance about the service lives of assets in this category, we apply the discount and show a lower bound for nonscientific R&D spending (line 2b) as well. Fourth, when we have a strong suspicion that a portion of the spending in table 1.3 may be for routine tasks or represent current consumption, we discount the point estimate 20 percent. For example, we know very little about the composition of purchased management expertise; but we guess that a portion of these costs are current expenses, so we deduct the arbitrary discount before carrying this item over to the capital spending column. The total on line 3b includes our point estimate for the costs of organizational development and change on own-account that, although highly imprecise, was developed by explicitly considering what was investment. Putting the pieces together, our table 1.4 estimates place business fixed investment in intangibles at nearly $1 trillion in the 1998–2000 period, or about 10–11 percent of existing GDP. Moreover, as indicated on the table, only a portion of this spending (about 2 percent of existing GDP) is currently included in the NIPAs. Indeed, when this is viewed in relation to NIPA business spending on tangible capital—that is, spending on durable equipment and structures—we find that businesses invested in intangibles at roughly the same rate at which they invested in tangibles. 21. Businesses reported to the Census Bureau that they spent about $12 billion on separately purchased, capitalized software in 1998, compared with the BEA’s estimates for business purchases of prepackaged and custom software that were each three and one-half times larger that year. The census figure is from table 8 in the 1998 issue of Annual Capital Expenditures.
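The four steps just described can be condensed into a simple classification rule. The sketch below is only a schematic of that logic; the "portion" fraction and the example figures are placeholders, not values used in the chapter.

```python
# Schematic of the four-step rule of section 1.3.2 for translating total spending on
# an intangible item into a capital-spending estimate. Rule labels paraphrase the text;
# the 0.5 fraction and the example input are hypothetical.

def capital_spending(total, rule):
    """Return (lower, upper) capital-spending bounds for one intangible item."""
    if rule == "all":             # step 1: research shows the spending is long-lived
        return total, total
    if rule == "portion":         # step 2: only a researched fraction pays off later
        return 0.5 * total, 0.5 * total
    if rule == "uncertain_life":  # step 3: service life may be under three years
        return 0.8 * total, total           # 20 percent "lifetime ignorance" discount
    if rule == "partly_current":  # step 4: part of the outlay is a current expense
        return 0.8 * total, 0.8 * total
    raise ValueError(rule)

print(capital_spending(100.0, "uncertain_life"))   # (80.0, 100.0)
```

In the chapter itself, the researched "portion" for an item such as advertising comes from evidence like the Landes and Rosenfield decay rates rather than from a fixed fraction.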
We recognize, of course, that the figures in tables 1.3 and 1.4 are imprecise and that, in putting them together, we raised many issues that we were unable to resolve: for example, software is a major tool in R&D (and thus the own-account software estimates and the scientific R&D figures likely overlap); remarkably little is known about innovative activity outside of the industries that employ scientists and engineers; and quantifying the scale and service lives of business outlays geared to the acquisition of economic competency is very difficult. All in all, though, one cannot escape the conclusion that business capital spending on intangibles was a very substantial and growing component of the economy in the 1990s—and that a serious attempt to fully account for these intangibles significantly alters one's view of the scale and composition of business fixed investment.

1.4 Real Output and Productivity Including Business Knowledge Capital: A First Step

Given the magnitude and trajectory of our estimates of business investment in knowledge capital, the full recognition of intangible assets in national economic accounting would be expected to have implications for measured changes in real output, output per hour, and TFP. As a first step in answering how much of an effect those changes might be, we adjusted the value of nonfarm business output (the nominal output metric used in most productivity calculations) to include unrecognized intangible investments, and we experimented with how to express the resulting adjusted output series in real terms. Alternative values of nonfarm business output adjusted to include unrecognized investment in intangibles are shown in table 1.5. The alternatives use the table 1.4 capital spending estimates corresponding to a one-year versus a three-year service life cutoff; as may be seen in table 1.5, the value of output including the unrecognized investment corresponding to a one-year service life cutoff exceeds that using the investment series corresponding to a three-year cutoff. The ratio of the published value of nonfarm business output to our alternative adjusted values (the ratio defined in equation [10]) also is shown in table 1.5. The ratio of published to adjusted is lower for the output figures that incorporate the unrecognized investment corresponding to a one-year cutoff. This implies that economic accounts that include intangibles and adopt a one-year cutoff would show a higher saving rate and a lower labor share than would accounts that adopt a three-year cutoff. However, because the published-adjusted ratio for each alternative is noticeably below one, the inclusion of unrecognized investment in intangibles would imply an economy with a higher saving rate and a lower labor share than implied by the existing published figures, whatever choice is made about the service life cutoff. The conversion of our estimates to real terms is especially difficult
Table 1.5   The value of nonfarm business output adjusted to include unrecognized investment in intangibles, by service life cutoff (annual average for period shown)

Service life cutoff            1988–1990    1993–1995    1998–2000    2000–2002
In billions of dollars
  One-year                       4,569        5,897        8,049        8,851
  Three-year                     4,504        5,809        7,903        8,688
Ratio, published to adjusted
  One-year                        .908         .906         .891         .889
  Three-year                      .921         .919         .908         .906
because intangibles often are owner constructed, or are “difficult-tomeasure” services (Griliches 1994), with no available (or reliable) price deflator. As a result, we use two surrogates, one based on the price of output for the overall nonfarm business sector, the other based on employment costs in the sector. The rationale for using the overall output price as a surrogate is that intangible investments are firm specific and occur broadly across industries; the rationale for using employment costs is that the construction of intangibles is a labor-intensive process, perhaps with little growth in productivity.22 Our estimates of real output using the two deflators are applied to the adjusted output series corresponding to the one-year service life cutoff and shown in table 1.6. In this table we present annualized results for periods that correspond to the final two columns in table 1.1. When the nonfarm business output price is used to deflate the unrecognized business investment, the rates of change in the adjusted real output measure exceed the existing measures of change, suggesting that a move to fully recognize intangibles could raise the growth rate of real output, on average, by a noticeable amount. However, if employment costs are used to deflate the unrecognized investment flows, the resulting rates of change in real output are little different from the published measures of change. The figures shown in table 1.6 thus frame the answer to the question posed at the outset of this paper: how much difference does the theoretically appropriate treatment of knowledge capital make to the productivity estimates shown in table 1.1? The adjustments to real output shown in table 1.6 imply corresponding percentage point adjustments to the rate of change in measured output per hour shown in line 1. All told, therefore, the 22. Under constant returns to scale, the rate of change in the intangible output price is connected via the factor price frontier to the rate of change in the prices of inputs used in the production of the output, less the rate of change in the efficiency, or productivity, of the process. If the rate of productivity change is close to zero, and labor’s share is close to one, employment costs are a good proxy for the output price.
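Footnote 22's argument can be made explicit with one line of growth accounting. In the sketch below the hats denote growth rates, $\hat{w}$ is the growth of employment costs, $\hat{c}$ the growth of the capital rental price, $s_L$ and $s_K$ the factor shares in producing intangibles (summing to one under constant returns), and $\hat{A}^N$ productivity growth in that activity; the notation is ours, not the chapter's.

```latex
% Dual (factor-price-frontier) form of the zero-profit condition for producing
% intangibles under constant returns to scale, with s_L + s_K = 1:
\[
  \hat{p}^{N} \;=\; s_{L}\,\hat{w} \;+\; s_{K}\,\hat{c} \;-\; \hat{A}^{N},
  \qquad\text{so}\qquad
  \hat{p}^{N} \;\approx\; \hat{w}
  \quad\text{when } \hat{A}^{N}\approx 0 \text{ and } s_{L}\approx 1 .
\]
```

This is the sense in which employment costs can serve as a surrogate deflator for own-account intangibles.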
Table 1.6   Real nonfarm business output adjusted to include unrecognized investment in intangibles, by type of deflator

Type of deflator^a                                   1990 to 1995    1995 to 2002
Average annual percent change
  Nonfarm business output price                          2.98            3.96
  Private business employment costs                      2.85            3.72
Adjusted, less published^b
  Nonfarm business output price                          0.12            0.25
  Private business employment costs                     –0.01            0.01
Memo: Published real nonfarm business output^c           2.86            3.71

a. Uses the investment series that corresponds to a one-year service life cutoff.
b. Percentage points per year.
c. Average annual percentage change.
change to measured productivity that would result from the full recognition of intangibles could be as large as that which occurred with the move to include software in business investment—about .25 percentage points per year in recent years. The potentially striking nature of these results underscores both the importance of further work on measuring business investment in intangibles in real terms and the need to obtain additional information on historical investment flows and service lives for the many and varied types of intangibles assets described in the previous section. We have made no estimate of the aggregate real stock of knowledge capital, nor of its impact on TFP, but our results and our framework suggest that intangible investments need to be included in the empirical accounting of factors determining economic growth. 1.5 Conclusion The remarkable performance of the U.S. economy in the second half of the 1990s has refocused attention on identifying the underlying sources of economic growth. In the introduction to this paper, we discussed some of the strands of literature that have, in different ways, started to stir the pot of growth analysis and measurement. Because we, too, share the general sense that the conventional framework and current data are not telling us all that we need to know about the role of knowledge capital in economic growth, we set out in this paper a broader framework for the economic measurement of capital. On the theory side, we described a growth framework that adds an explicitly intertemporal dimension to the standard Solow growth-accounting framework. One important way in which the extended framework is useful is
that it provides guidance on the "boundary" question of what should be included in investment and therefore what measure of capital should be entered into the production function. The conventional framework provides little guidance on this point, while the extended framework yields the concept necessary to define investment: any use of resources that reduces current consumption in order to increase consumption in the future qualifies as investment. Thus, spending on a host of intangibles—including R&D, copyrights, films, computerized databases, improved organizational structures, brand equity, and so on—should, in principle, be counted as investment. In terms of practice, we assembled data on spending on intangibles (using the types of definitions that national economic accountants might use) to gauge their possible magnitude via the application of our model. The estimates suggested that business fixed investment in intangibles may have been as large as the spending on tangible capital—as much as $1 trillion in recent years—and that a significant portion of this amount is excluded from the existing investment figures in the NIPAs. Although we regard our numbers on intangible investment as illustrative, not definitive, the inclusion of heretofore unrecognized business intangible capital in the NIPAs implies that the move could alter the average rate of growth of real output and labor productivity in the late 1990s. We believe that the statistical agencies and the broader research community should construct satellite accounts for as many of the categories of intangible investment as possible. Satellite accounts would illuminate the data hurdles and information needs and position researchers to suggest improved techniques and data sources. At the same time, the use of satellite accounts would insulate the headline accounts from the spotty data that invariably would be used in the satellites. One noteworthy effort in the development of satellite accounts is the work of Fraumeni and Okubo (chap. 8 in this volume), who take a first look at an accounting of GDP that includes scientific R&D. As indicated, we regard this paper, along with other papers in this volume, as one step in a long process. Despite the challenges ahead, we believe that useful progress is being made on measuring and understanding the role of business intangibles in the economy and that substantial further progress is both possible and necessary.
References Advisory Commission to Study the Consumer Price Index. 1996. Toward a more accurate measure of the cost of living. Final Report to the Senate Finance Committee. 104th Cong., 2nd sess., 4 December.
Baily, Martin Neil. 1981. Productivity and the services of capital and labor. Brookings Papers on Economic Activity, Issue no. 1:1–65. Bartel, Ann. 1991. Employee training programs in U.S. businesses. In Market failure in training: New economic analysis and evidence on training of adult employees, ed. David Stern and Jozef M. M. Ritzen, 99–134. New York: Springer-Verlag. ———. 1994. Productivity gains from the implementation of employee training programs. Industrial Relations 33 (4): 411–25. Black, Sandra, and Lisa M. Lynch. 1996. Human capital investments and productivity. American Economic Review 86 (2): 263–67. Blair, Margaret M., and Steven M. H. Wallman. 2000. Unseen wealth: Report of the Brookings Task Force on Understanding Intangible Sources of Value. Washington, DC: Brookings Institution. Bosworth, Barry, and Jack E. Triplett. 2003. Services productivity in the United States: Griliches services volume revisited. Paper presented at the Conference on Research in Income and Wealth in Memory of Zvi Griliches. 19–20 September, Washington, DC. Brynjolfsson, Erik, Lorin M. Hitt, and Shinkyu Yang. 2002. Intangible assets: Computers and organizational capital. Brookings Papers on Economic Activity, Macroeconomics: 137–99. Brynjolfsson, Erik, and Shinkyu Yang. 1999. The intangible costs and benefits of computer investments: Evidence from the financial markets. Proceedings of the International Conference on Information Systems, Atlanta, Georgia. Caves, Richard E. 2000. Creative industries. Cambridge, MA: Harvard University Press. Cockburn, Iain, and Zvi Griliches. 1988. Industry effects and appropriability measures in the stock market’s valuation of R&D and patents. American Economic Review 78 (2): 419–23. Commission of the European Communities/Eurostat, the International Monetary Fund, the Organization for Economic Cooperation and Development, the Statistics Division of the former Department for Economic and Social Information and Policy Analysis and the regional commission of the United Nations Secretariat, and the World Bank. 1993. System of national accounts 1993. New York: United Nations. Corrado, Carol, and Larry Slifman. 1999. A decomposition of productivity and costs. American Economic Review 89 (2): 328–32. Denison, Edward F. 1962. The sources of economic growth in the United States and the alternatives before us. New York: Committee for Economic Development. ———. 1964. The unimportance of the embodiment question. American Economic Review 79 (5): 90–94. Griliches, Zvi. 1981. Market value, R&D, and patents. Economic Letters 7:183–87. ———, ed. 1984. R&D, patents, and productivity. Chicago: University of Chicago Press. ———. 1994. Productivity, R&D, and the data constraint. American Economic Review 84 (1): 1–23. Gullickson, William, and Michael J. Harper. 1999. Possible measurement bias in aggregate productivity growth. Monthly Labor Review 122 (February): 47–67. Hall, Bronwyn H. 1993. The stock market value of R&D investment during the 1980s. American Economic Review 83 (May): 259–64. Hall, Bronwyn H., and Robert E. Hall. 1993. The value and performance of U.S. corporations. Brookings Papers on Economic Activity, Issue no. 1:1–34. Hall, Robert E. 2000. E-capital: The link between the stock market and the labor market in the 1990s. Brookings Papers on Economic Activity, Issue no. 2:73–118.
———. 2001a. The stock market and capital accumulation. American Economic Review 91 (December): 1185–1202. ———. 2001b. Struggling to understand the stock market. American Economic Review Papers and Proceedings 91 (May): 1–11. Hulten, Charles R. 1979. On the “importance” of productivity change. American Economic Review 69:126–36. ———. 2001. Total factor productivity: A short biography. In New developments in productivity analysis, ed. Charles R. Hulten, Edward R. Dean, and Michael J. Harper, 1–47. Chicago: University of Chicago Press. Jorgenson, Dale W., and Zvi Griliches. 1967. The explanation of productivity change. Review of Economic Studies 34 (July): 349–83. ———. 1972. Issues in growth accounting: A reply to Edward F. Denison. Survey of Current Business 52:65–94. Kendrick, John. 1961. Productivity trends in the United States. New York: National Bureau of Economic Research. Khan, Mosahid. 2001. Investment in knowledge. OECD STI Review 42:19–48. Landes, Elisabeth M., and Andrew M. Rosenfield. 1994. The durability of advertising revisited. Journal of Industrial Economics 42 (September): 263–76. Lebow, David E., John M. Roberts, and David J. Stockton. 1994. Monetary policy and “The price level.” Board of Governors of the Federal Reserve System. Mimeograph. Lev, Baruch. 2001. Intangibles: Management, measurement, and reporting. Washington, DC: Brookings Institution. McGratten, Ellen, and Edward C. Prescott. 2000. Is the stock market overvalued? Federal Reserve Bank of Minneapolis Quarterly Review 24 (4): 20–40. Moulton, Brent R., Robert P. Parker, and Eugene P. Seskin. 1999. A preview of the 1999 comprehensive revision of the National Income and Product Accounts. Survey of Current Business 79:7–20. Nakamura, Leonard. 1999. Intangibles: What put the new in the new economy? Federal Reserve Bank of Philadelphia Business Review (July/August): 3–16. ———. 2001. What is the U.S. gross investment in intangibles? (At least) one trillion dollars a year! Federal Reserve Bank of Philadelphia Working Paper no. 01-15. ———. 2003. The rise in gross private investment in intangible assets since 1978. Federal Reserve Bank of Philadelphia. Mimeograph. National Science Board. 2000. Science and engineering indicators—2002. Arlington, VA: National Science Foundation. Nordhaus, William D. 1997. Do real output and real wage measures capture reality? The history of light suggests not. In The economics of new goods, ed. Timothy F. Bresnahan and Robert J. Gordon, 29–66. Chicago: University of Chicago Press. Oliner, Stephen D., and Daniel E. Sichel. 2000. The resurgence of growth in the late 1990s: Is information technology the story? Journal of Economic Perspectives 14 (Fall): 3–22. ———. 2002. Information technology and productivity: Where are we now and where are we going? Federal Reserve Bank of Atlanta Economic Review 87:15–44. Organization for Economic Cooperation and Development (OECD) Secretariat. 1998. Measuring intangible investment: Selected bibliography. Available at http://www1.oecd.org/dsti/sti/industry/indcomp/prod/paper16.pdf. Parker, Robert, and Bruce Grimm. 2000. Recognition of business and government expenditures for software as investment: Methodology and quantitative impacts, 1959–98. Bureau of Economic Analysis. http://www.bea.gov/bea/papers/software .pdf.
Shapiro, Matthew D., and David W. Wilcox. 1996. Mismeasurement in the Consumer Price Index: An evaluation. NBER Macroeconomics Annual, ed. Ben S. Bernanke and Julio J. Rotemberg, 93–142. Cambridge: MIT Press. Solow, Robert M. 1956. A contribution to the theory of economic growth. Quarterly Journal of Economics 70 (February): 65–94. ———. 1957. Technical change and the aggregate production function. Review of Economics and Statistics 39 (August): 312–20. ———. 1960. Investment and technical progress. In Mathematical methods in the social sciences, ed. K. Arrow, S. Karlin, and P. Suppes, 89–104. Stanford, CA: Stanford University Press. ———. 1987. Review of The nature of economics, by Jane Jacobs. New York Times, July 12, 1987, 36. Stiroh, Kevin J. 1998. Computers, productivity, and input substitution. Economic Inquiry 36 (April): 175–91. Swan, T. W. 1956. Economic growth and capital accumulation. The Economic Record 32:334–61. Triplett, Jack E. 1999. Economic statistics, the new economy, and the productivity slowdown. Business Economics 34 (April): 13–17. Young, Allan H., and John C. Musgrave. 1980. Estimation of capital stock in the United States. In The measurement of capital, ed. Dan Usher, 23–68. Chicago: University of Chicago Press.
Comment
Edward C. Prescott
My comments are organized as follows. First I state why I think this paper is of great value to macroeconomic researchers: intangible capital investment is large and cannot be ignored when we are addressing many important macroeconomic questions. Then I argue that the intangible capital investment numbers are not sufficiently protected from manipulation by politicians to be included in any official national accounting system.
Why Currently Unmeasured Intangible Capital Investment Is Important in Macroeconomic Analyses
What I found most interesting in this very interesting paper is table 1.3. Table 1.3 summarizes the authors’ best guess or estimate for the values of intangible investments. They divide intangibles into three categories: computerized information, scientific and creative property, and economic competencies. For the 1998–2000 period the estimated size of business spending on intangible investments is 0.13 times GDP, which is a large number. I will now discuss the individual components and
Edward C. Prescott is a senior monetary advisor at the Federal Reserve Bank of Minneapolis, the W. P. Carey Chair in economics at Arizona State University, a research associate of the National Bureau of Economic Research, and a 2004 Nobel Laureate in Economics.
will argue that this large number may greatly understate the size of these intangible investments. Computerized information is primarily software, which is included now in the national income and product accounts (NIPA). For the period considered, 1998–2000, Corrado, Hulten, and Sichel estimate computer software investment at around 0.015 times GDP. If they include estimates of development costs, their total for computerized information is around 0.017 times GDP. These numbers may be conservative. I say this because I know a little about what goes on at the Federal Reserve Bank of Minneapolis. The information technology (IT) department employs 10 percent of this organization’s workforce. Most of these people are not involved in the direct production of services that are supplied by the Minneapolis Fed—check clearing, cash provision, banking supervision, policy advice, protection, and handling Treasury Direct. About half of the time of the IT people is devoted to providing services to these production units within the Minneapolis Fed and therefore is an intermediate good. The other half, however, is allocated to investment activities, which implies a big number for computerized information investment. This suggests that 0.017 times GDP for computerized information is too small a number. The second category is scientific and creative property investments. The largest component here is R&D expenditures that are expensed and therefore not included in the NIPA. R&D expenditures, as estimated by the National Science Foundation (NSF), are around 0.019 times GDP. The smaller categories of technology and innovative property are mineral exploration, copyright and license costs, and other R&D expenditures not counted by the NSF, such as R&D in finance and other service industries. Corrado and her coauthors estimate the total of mineral exploration, copyright and license costs, and R&D expenditures to be about 0.042 times GDP. Again, I think their estimates are conservative. Some R&D activity is mixed with production activities. For example, an engineer in the production sector may be supervising activities or may be studying how to make the production process more efficient. The part of the engineer’s time allocated to making the production process more efficient is R&D investment. The returns to this investment are often captured in what is called learning by doing. Actually, engineers are not the only ones making investments. Production workers try different things and learn from seeing what happens. This is an investment because the knowledge gained enhances future production possibilities. The final category is economic competencies. This category is the most difficult to measure but quite possibly the largest. Corrado and her coauthors include firm-specific human capital, organizational capital, and
brand equity. The total for these categories is a large number, 0.064 times GDP during the 1998–2000 period. Let me start with brand equity because better measures are available for this category. Investments for brand equity include expenditures on advertising and market research. Corrado and her coauthors estimate that this investment for advertising is 0.023 times GDP and that market research is 0.002 times GDP. For firm-specific human capital, Corrado and her coauthors use BLS surveys to estimate the costs of developing workforce skills such as on-thejob training and payments for job-related education. They estimate these expenditures to be 0.012 times GDP. On-the-job training is an important investment. Indeed, Heckman, Lochner, and Taber (1998) and Parente and Prescott (2000) estimate it to be 0.075 times GDP. They estimate the worker-financed, on-the-job skill investment using experience wage profiles. There is also some skill-capital investment that is firm specific and is financed by the firm. Given that this skill-capital investment is produced in the business sector, it is part of the output of the business sector. The part financed by the worker is part of labor compensation. On-the-job skill investment is probably closer to 0.075 times GDP than the 0.012 times GDP estimate of Corrado and coauthors. For organization capital, Corrado and her coauthors use information on executives’ wages and management consulting fees. The idea is that executive time spent on investment decisions is a proxy for organizational change and development investment. I think that this number is too small. Many years ago, Prescott and Visscher (1980) wrote a paper on organization capital. We found that efficiencies of organizations improve over time as the workers learn who in the organization knows what. Knowing whom to ask for the answer to a question is almost as good as knowing the answer to that question. People spend a lot of time interacting within organizations, and this interaction has an investment component because it results in knowing how to get things done within the organization. The Corrado, Hulten, and Sichel upper-bound estimate for organizational capital investment is 0.030 times GDP. These estimates for investments, while interesting, are not the end of the story. For some purposes, what we need to know is how large the stock of intangible capital is. This stock has to be important in accounting for differences in output across countries and across time. It is also important in judging whether the stock market is overvalued, undervalued, or correctly valued. We can use the investment numbers from Corrado and coauthors if we have some measure of depreciation to estimate this stock. While there has been some work on determining depreciation rates of intangible capital, it has been limited to depreciation of R&D capital. McGrattan and Prescott (2002) needed an estimate of the corporate
intangible capital stock in an analysis of the U.S. stock market. We developed an indirect route to the measurement of the corporate intangible net capital stock that does not require knowledge of depreciation. Our approach uses BEA measures of corporate profits, corporate tax liabilities, tangible investment, tangible depreciation, and growth in GDP. Under the assumption that the after-tax returns on tangible and intangible capital are equal, growth theory gives us a formula for intangible capital as a function of these data and the BEA measures of corporate tangible capital stocks. We find that the corporate intangible capital stock is large now and was large in the past. If the NSF’s 11 percent R&D depreciation rate is used for all intangible investment, the upper-end investment numbers for Corrado and coauthors imply intangible capital stock close to our indirect estimate. This consistency is comforting. Parente and Prescott (2000) found additional evidence that investment in intangible capital is a very large number, much bigger than the estimate of Corrado and coauthors. The evidence is provided by the behavior of the Japanese economy during its postwar growth miracle. Parente and I use the growth model with intangible capital and find that only if the intangible investment is very large is the Japanese growth miracle consistent with the theory. If it were not large, convergence to the steady-state growth path would have been too rapid. Should the national accounts be changed to better incorporate intangible investment? I turn now to the question of whether the national income accounts should be modified to include intangible investments. My view is that they should be included if there are market prices and not otherwise. One category of intangible capital investment that it may make sense to include is R&D investment. This investment is monitored by the tax authorities, and the United States gives tax credits for some R&D investments. Those R&D investments for which there are credits could be included as part of output. However, even here, there is the danger that the definition of expenditures that fall into this class will be broadened for political reasons in order to make it appear that the economy is doing better than it is, in fact, doing. Consequently, what should be done in this case is not clear. Perhaps it is better to categorize expenditures as investment only if firms are required to capitalize these expenditures for tax purposes. As my comments indicate, the numbers that Corrado and coauthors have come up with are conservative, and I think that intangible capital investment is larger than their estimate. However, in being conservative in their estimates, I think that they showed good judgment. Only hard numbers should be reported in even a supplementary accounting system. To conclude, this is an important paper that I am sure will spur further research on this very important topic.
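The stock calculation sketched in the comment, which combines intangible investment flows with a depreciation rate, can be illustrated with a simple perpetual-inventory computation. The sketch below is purely illustrative: the investment rate, depreciation rate, and nominal growth rate are assumed round numbers in the spirit of the discussion above, not figures taken from Corrado, Hulten, and Sichel or from McGrattan and Prescott.

```python
# Illustrative perpetual-inventory calculation: infer a stock of intangible
# capital from an assumed investment flow and an assumed depreciation rate.
# All parameter values are assumptions chosen for illustration only.

def steady_state_stock(invest_rate, delta, g):
    """Steady-state stock-to-GDP ratio when investment is a constant share of
    GDP, nominal GDP grows at rate g, and the stock depreciates at rate delta."""
    return invest_rate * (1.0 + g) / (delta + g)

def perpetual_inventory(invest_rate, delta, g, years=100):
    """Accumulate investment flows into a stock and report the stock-to-GDP ratio."""
    gdp, stock = 1.0, 0.0
    for _ in range(years):
        gdp *= 1.0 + g
        stock = (1.0 - delta) * stock + invest_rate * gdp
    return stock / gdp

if __name__ == "__main__":
    i, delta, g = 0.13, 0.11, 0.06   # assumed investment share, depreciation, growth
    print("steady-state stock:", round(steady_state_stock(i, delta, g), 2), "x GDP")
    print("after 100 years:  ", round(perpetual_inventory(i, delta, g), 2), "x GDP")
```

Both calculations give the same stock-to-GDP ratio in the long run; the point of the sketch is simply that a flow of investment plus a depreciation rate is enough to pin down a stock.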
References Heckman, James J., Lance Lochner, and Christopher Taber. 1998. Tax policy and human-capital formation. American Economic Review 88 (2): 293–97. McGrattan, Ellen R., and Edward C. Prescott. 2002. Taxes, regulations, and the value of U.S. corporations: A general equilibrium analysis. Research Department Staff Report no. 309. Federal Reserve Bank of Minneapolis. Parente, Stephen L., and Edward C. Prescott. 2000. Barriers to riches. Cambridge: MIT Press. Prescott, Edward C., and Michael Visscher. 1980. Organization capital. Journal of Political Economy 88 (3): 446–61.
2 A New Approach to the Valuation of Intangible Capital Jason G. Cummins
2.1 Introduction In most circumstances, there are no direct measures of the return on intangible capital. As a result, researchers have relied primarily on the equity market to infer the value of intangibles. This valuation method is straightforward: if the equity market reveals the intrinsic value of the firm, then subtracting the value of the firm’s tangible assets from its market value reveals the value of the firm’s intangible assets. Using this method, Robert Hall (2001) argued that U.S. companies accumulated an enormous stock of intangible capital in the 1990s.1 Despite the appealing simplicity of the equity market approach to measuring intangibles, it should be used with considerable caution. According to this method, Yahoo!’s intangibles were worth more than $100 billion in 2000. However, they were worth less than one-third of that amount in 2003. To be sure, this drop does not necessarily pose a problem for the equity market approach. Yahoo!’s market capitalization may reflect changes in Jason G. Cummins is a senior economist in the Division of Research and Statistics at the Federal Reserve Board. Baruch Lev has been instrumental in shaping my thinking about intangible assets. I thank him for his guidance and for providing me with the data set on information technology investment that he and Suresh Radhakrishnan put together. I am also indebted to Stephen Bond, whose collaboration laid the foundation for this research. Daniel Cooper provided research assistance. Ned Nadiri, the editors, conference participants, and Darrel Cohen all provided helpful comments and suggestions. I/B/E/S International Inc. provided the data on earnings expectations. The views presented are solely mine and do not necessarily represent those of the Board of Governors of the Federal Reserve System or its staff members. 1. The idea that the stock market reveals the quantity of capital in the absence of rents and adjustment costs was stated clearly by Baily (1981), who interpreted the stock market data from the 1970s as showing that energy price shocks effectively destroyed a great deal of capital.
expected profits or expected returns or both. But this example illustrates a potential pitfall of relying on the equity market to reveal the value of intangible capital. This value will be mismeasured to the extent that asset prices depart from their intrinsic values. The main drawback of the equity market approach is that it presents a catch-22: investors must have information about intangibles to value them, but investors do not have the information they need because intangibles, by their very nature, are extraordinarily difficult to value. This circularity calls into question the underlying assumption of the equity market approach—that markets are strongly efficient. How can the value of the firm as revealed by equity markets be equal to the intrinsic value of the firm— defined as the present value of expected cash flows—when market participants know so little about the value of intangibles? To create an alternative proxy for the intrinsic value of the firm, I construct the discounted value of expected profits from analysts’ forecasts. I/B/E/S has collected data on profit forecasts for a large sample of companies since 1982. The analysts forecast profits for one and two years ahead as well as the growth rate of profits out to a five-year horizon. In making their forecasts, analysts assess whether a new supply-chain management system, say, is expected to add to intangible capital and, as a result, generate additional profits. Thus, if analysts expect intangibles to contribute materially to a company’s bottom line over a five-year period, then their forecasts should reflect the value of these intangibles. Of course, analysts’ forecasts are not foolproof. After all, the majority of analysts appear to have overestimated the growth rates of intangibleintensive companies in the late 1990s. And analysts offer little guidance about how to discount these forecasts. In fact, the discounted value of expected profits may be just as poor a proxy for a firm’s intrinsic value as the stock market is. However, these two proxies deviate from a firm’s intrinsic value for different reasons. The stock-market-based measure reflects any market inefficiency, whereas the analyst-based measure reflects the biases of analysts and any mistakes in the way the forecasts are discounted. The econometric setup explicitly recognizes that the two proxies measure the firm’s intrinsic value with different kinds of error. Ultimately, identification of the model’s parameters depends on whether there are informative instrumental variables that are uncorrelated with the measurement errors in the two proxies. Theory offers little guidance about the nature of the measurement errors, and consequently, identification is an empirical issue that must be investigated with diagnostic tests, such as the test of the model’s overidentifying restrictions. For my empirical work, I compiled a data set that distinguishes firms’ expenditures on tangible capital, information technology (IT), and intellectual property (IP). Relying on these data, I use the stock-market- and analyst-based measures of the firm’s intrinsic value to estimate the return
on each type of capital. Perhaps the most interesting finding is that organizational capital created by IT generates a sizable, albeit imprecisely estimated, return. Despite its magnitude, the estimated return is considerably smaller than comparable estimates in previous studies. To build a bridge to the previous research, I show that much larger estimates can be obtained with ordinary least squares (OLS), a method that ignores the possibility that the value of the firm and its investment policy are simultaneously determined.
2.2 The Valuation of Intangible Capital
2.2.1 Intangible Capital: An Instrumental Definition
I distinguish between two types of intangibles: IP and organizational capital. Broadly defined, IP includes patents, trademarks, copyrights, brand names, secret formulas, and so on. For my purposes, I define organizational capital as business models, designs, and routines that create value from information technology. Without a doubt, organizational capital is a broader concept than this narrow definition suggests. For example, innovative compensation policies and effective training programs are surely part of organizational capital. Indeed, the systematic focus on creating organizational capital can be traced to industrial pioneer Frederick Winslow Taylor and his intellectual forebears. I adopt a definition based on IT not because IT is qualitatively different from any other method or technology that aids organizational efficiency but because sizable, measurable outlays are devoted to it. This two-part taxonomy suits my empirical model and the data. The data warrant brief explanation. Companies report expenditures on R&D and advertising, which create what I have defined as intellectual property. These expenditures can be capitalized to create the IP capital stock. Such a stock may seem essentially arbitrary—companies offer little guidance, for example, about how R&D and advertising depreciate—but the stock of property, plant, and equipment is a similarly unpalatable concept, even though researchers have become sufficiently inured to it.2 As a practical matter, one must also distinguish between IP and organizational capital because outlays on R&D, advertising, and IT have behaved differently over time. In particular, R&D and advertising appear to be declining in relative importance. Outlays on IT have soared while advertising as a proportion of nonfinancial corporate gross domestic product has grown 2. Indeed, the accounting for physical assets in financial statements may be as deficient as the accounting for IP. Physical assets are capitalized at historical costs and are depreciated in ways that may be poor approximations of their service flow. Perpetual inventory capital stocks constructed from such data may also be only loose approximations of the service flow of capital.
modestly, from 3.9 percent in 1980–89 to 4.1 percent in 1990–97. The comparable figures for R&D are 2.3 percent and 2.9 percent (Nakamura 1999). Hence, if intangibles create extraordinary gains in firm value, then arguably the most plausible driver is organizational capital, not IP. So what exactly is organizational capital? As a purely mechanical matter, I define organizational capital as an adjustment cost from IT investment, defined as the difference between the value of installed IT and that of uninstalled IT.3 Suppose a company purchases database software. By itself, database software does not generate any value. At a minimum, the software must be combined with a database and, perhaps, a sales force. Organizational capital defines how the database is used and, consequently, how software investment creates value. Another example helps illustrate the definition. Dell’s value depends on a unique organizational design that sells build-to-order computers directly to customers. Dell’s tangible capital stock differs little from that of Hewlett-Packard (HP), one of its main competitors, because both companies assemble computers. The reason that any given piece of tangible capital is more valuable at Dell than at HP relates to Dell’s unique business model and routines, organizational capital that combines the usual factors of production in a special way. Hewlett-Packard cannot simply replicate Dell’s tangible capital stock and become as profitable as Dell. Hence, it does not make sense to think about organizational capital, or intangibles more generally, as separate factors of production that can be purchased in a market. In most cases, intangibles are so closely connected with traditional factor inputs like a computer or a college graduate that their valuation as stand-alones is nearly impossible (see, e.g., Lev 2001). This definition of organizational capital contrasts sharply with the tendency in the literature to think about intangible capital as being much like any other quasi-fixed factor of production. In that mold, firms buy intangibles as they would buy machinery. But intangibles, by and large, are different from other factors because companies cannot order or hire intangibles. Rather, intangible capital typically results from the distinctive way companies combine the usual factors of production. Treating intangibles as inputs misses this point altogether. The model in the next section formalizes this observation by defining intangibles as whatever makes installed inputs more valuable than uninstalled inputs—that is, whatever makes a Dell out of the same computers and college graduates that HP can buy. Realistically, this definition is not exhaustive because some intangibles are not associated with specific ex3. This rather narrow definition based on IT adjustment costs builds on a broader interpretation of organizational capital in terms of adjustment costs, as in, for example, Prescott and Visscher (1980).
penditures. For example, a good idea—in Dell’s case, selling computers over the Internet—can be thought of as a type of intangible capital. Nevertheless, most intangibles are closely associated with some sort of outlay; after all, at least some investment is usually needed to make a good idea profitable. My definition of organizational capital may seem similar to the more familiar concept of multifactor productivity (MFP) or IT-biased technical change. Indeed, organizational capital is like IT-biased technical change in that it boosts the marginal product of IT capital. However, the concepts differ in a critical respect: Organizational capital is costly to create; in contrast, IT-based technical change and MFP require no specific outlays, and for that reason they are called “manna from heaven.” Organizational capital should also be distinguished from embodied technical change. Whereas embodied technical change characterizes the capabilities of a particular asset—disk drives are more efficient and reliable than they used to be—organizational capital depends on how the firm uses an asset. In the example discussed above, both Dell and HP can buy the same technology embodied in a new disk drive, but the drive is more valuable at Dell because of Dell’s superior organizational capital. 2.2.2 Theoretical Model The model is a straightforward variant of the one developed by Hayashi and Inoue (1991), who derived an expression for the value of a firm with multiple capital goods; it follows the derivation in Bond and Cummins (2000b). Using a method similar to mine, B. Hall (1993a) relied on Hayashi and Inoue’s model to estimate the rate of return on R&D. The novel twist in my application is the idea that intangibles are like adjustment costs and therefore can be estimated econometrically. In each period, the firm chooses to invest in each type of capital good: It (I1t , . . . , INt ), where j indexes the N different types of capital goods and t indexes time.4 This decision is equivalent to choosing a sequence of capital stocks Kt (K1t , . . . , KNt ), given K t–1, to maximize Vt , the cum-dividend value of the firm, defined as (1)
$$V_t = E_t\!\left[\sum_{s=t}^{\infty}\beta_t^{\,s}\,\Pi(K_s, I_s, \varepsilon_s)\right],$$
where $E_t$ is the expectations operator conditional on the set of information available at the beginning of period t; $\beta_t^{\,s}$ discounts net revenue in period s back to time t; and $\Pi$ is the revenue function net of factor payments, which includes the productivity shock $\varepsilon_s$ as an argument. $\Pi$ is linear homogeneous
in $(K_s, I_s)$, and the capital goods are the only quasi-fixed factors or, equivalently, variable factors have been maximized out of $\Pi$. For convenience in presenting the model, I assume that the firm pays no taxes, issues no debt, and has no current assets, although these considerations are incorporated into the empirical work. The firm maximizes equation (1) subject to the series of constraints:
(2) $K_{j,t+s} = (1 - \delta_j)K_{j,t+s-1} + I_{j,t+s} \qquad s \ge 0,$
where $\delta_j$ is the rate of physical depreciation for capital good j. In this formulation, investment is subject to adjustment costs but becomes productive immediately. Furthermore, I assume that current profits are known so that the firm, when choosing $I_{jt}$, knows both the prices and the productivity shock in period t. Other formulations such as one including a production lag, a decision lag, or both are possible, but this specification is more parsimonious. Let the multipliers associated with the constraints in equation (2) be $\lambda_{j,t+s}$. Then the first-order conditions for maximizing equation (1) subject to equation (2) are
(3) $-\dfrac{\partial \Pi_t}{\partial I_{jt}} = \lambda_{jt} \qquad \forall j = 1, \ldots, N$
and
(4) $\lambda_{jt} = \dfrac{\partial \Pi_t}{\partial K_{jt}} + (1 - \delta_j)\,\beta_t^{\,t+1} E_t(\lambda_{j,t+1}) = E_t\!\left[\sum_{s=0}^{\infty}(1 - \delta_j)^s\,\beta_t^{\,t+s}\,\dfrac{\partial \Pi_{t+s}}{\partial K_{j,t+s}}\right] \qquad \forall j = 1, \ldots, N.$
∑ (1 )K jt
j
j,t1
εt t tt1 Et
j1
Et
∑
t ts
s0
N
∑
(1 j )Kjt
j,t1
j1
ts Vt .
Hence, the value of the firm can be expressed as the sum of the installed values of the beginning-of-period capital stocks, which, according to equation (2), are equal to the difference between the current capital stock and the current investment. Because three types of capital are included in the empirical work, the specific equation considered is (5)
(5) $V_t = \lambda_K(K_t - I_t) + \lambda_{KIT}(KIT_t - IT_t) + \lambda_{KIP}(KIP_t - IP_t) + \varepsilon_t,$
where investment in tangible capital (excluding IT), information technology, and intellectual property are I, IT, and IP, respectively; the capital stock (excluding IT) is denoted by K, and the IT and IP capital stocks are distinguished by appending IT and IP. According to equation (3), the multiplier on each capital stock is the gross marginal cost of an additional unit of capital, which is equal to the price of capital including adjustment costs. To be more concrete, I posit an adjustment cost function, C, that is additively separable from the net revenue function: (6)
$$\lambda_{jt} = p_j + \frac{\partial C_t}{\partial I_{jt}}.$$
In this equation, the purchase price of capital is distinguishable from marginal adjustment costs, which are additional outlays needed to make investment productive. This separation is attractive because adjustment costs such as the costs of training workers to use new equipment and of integrating new and old equipment create intangible capital.5 Moreover, regarding empirical research, we have a well-developed literature on estimating adjustment costs econometrically, whereas we have no practical way of directly measuring the value of intangible capital from available data. In fact, the estimated marginal adjustment costs are equal to the return on intangible capital in equilibrium. That is, note that firms will invest until the gross marginal cost of an additional unit of capital in equation (6) is equal to the marginal product of capital, defined by equation (4), also known as the Euler equation. Therefore, the equilibrium return on intangible capital can be equated with adjustment costs. Returning to the Dell-HP example, one might be tempted to characterize the difference between Dell and HP by saying that the level of MFP is higher at Dell than at HP. But this characterization is not sufficiently informative because it does not explain why Dell produces more with less. In contrast, the valuation equation (5) shows that it is possible to trace the sources of Dell’s superior valuation to its intangible capital, specifically the intangible capital associated with its previous investments in IT and IP. New software, say, is more valuable at Dell because of the way it is used. Although this approach is more informative than the one that attributes any difference to MFP, admittedly it still falls short. In particular, this approach fails to explain how software became more valuable at Dell; estimating (5) provides no blueprint for creating value. To gain further insight, we need considerably better data and more-detailed case studies. Interpreting the estimates of equation (5) is more complicated than it 5. For example, Hempell (2003) finds broad evidence that firms complement IT spending with training programs for their employees (see also Bresnahan, Brynjolfsson, and Hitt 2002). According to Hempell’s empirical results, firms that invest intensively in both training and IT perform significantly better than do competitors that forgo such investment.
may seem at first glance. Although the multipliers are assumed to be constant, the value of intangible capital can vary over time and across firms; indeed, the comparison of Dell with HP suggests that this variance is a realistic possibility. Regrettably, the empirical framework is not rich enough to accommodate this consideration. In practice, the problem is not as bad as it may seem because I control for firm- and time-specific effects. Nevertheless, to the extent that the multipliers are not constant after controlling for these effects, the empirical estimates of the multipliers will be averages across firms and time.6 Hence, econometricians must exercise extreme caution when interpreting the estimates as structural parameters, which they are not; rather, the estimates reveal the average return on IP and organizational capital. Finally, this limitation is not unique to my formulation. On the contrary, my formulation is closely related to production- or costfunction estimation, in which the parameters are assumed to be constant across firms and time despite the debatable case for such an assumption. 2.3 Estimating the Empirical Valuation Equation Estimating the empirical valuation equation (5) would be straightforward if data on the intrinsic value of the firm were available and the error term were an innovation. As I will discuss in turn, neither of these conditions is likely to hold. As a result, estimates based on OLS will be biased. Identification is still possible in certain circumstances with generalized method of moments (GMM). However, the GMM approach does have some notable drawbacks, which I discuss in the final subsection. Two primary issues affect the estimation of equation (5):
• The econometrician cannot observe the intrinsic value of the firm. What I have called the equity market approach explicitly assumes that the stock market value of the firm, V E, equals the intrinsic value of the firm, V. Alternatively, one can argue that any market mismeasurement is orthogonal to the firm’s current capital stocks and investments. Because both of these conditions are at least suspect, I propose an alternative that arguably rests on firmer footing. • The econometrician also cannot observe the productivity shock, ε, such as a new product or process, but this shock affects both the value of the firm and its investment policy. As a result, OLS estimates will be biased. Instead of OLS, I use the system-GMM estimator proposed by Blundell and Bond (1998, 2000). They show that the system-GMM estimator performs well when there are fixed effects and the endogenous variables have near unit roots, as is true of all three types of capital. 6. Cross-sectional estimation does not sidestep this problem entirely because the estimates will still be averages across firms. Moreover, cross-sectional estimation is inadvisable because it does not control for firm-specific effects.
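The second bullet can be illustrated with a small simulation. The sketch below is a stylized example, not the chapter's estimator, data, or parameter values; it shows that a firm-specific component of the productivity shock that is correlated with the capital stock biases pooled OLS, while a transformation that removes the firm effect (here, first differences) does not suffer from that particular bias.

```python
# Stylized illustration of simultaneity/fixed-effect bias in the valuation
# regression: a firm-specific productivity effect raises both the firm's
# value and its capital stock, so pooled OLS overstates the capital
# coefficient; first-differencing sweeps the firm effect out.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years, true_lambda = 500, 6, 1.0

alpha = rng.normal(0.0, 2.0, size=n_firms)                      # firm effect
k = alpha[:, None] + rng.normal(0.0, 1.0, (n_firms, n_years)).cumsum(axis=1)
eps = rng.normal(0.0, 1.0, (n_firms, n_years))                  # idiosyncratic shock
v = true_lambda * k + alpha[:, None] + eps                      # "valuation equation"

b_pooled = np.polyfit(k.ravel(), v.ravel(), 1)[0]               # ignores firm effect
b_diff = np.polyfit(np.diff(k, axis=1).ravel(),                 # removes firm effect
                    np.diff(v, axis=1).ravel(), 1)[0]

print(f"true coefficient {true_lambda:.2f}, pooled OLS {b_pooled:.2f}, "
      f"first differences {b_diff:.2f}")
```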
2.3.1 Unobservable Value of the Firm The most widely used proxy for the intrinsic value of the firm is its stock market value. According to one view of the stock market, this approach makes good sense because share prices reflect the discounted value of expected future distributions from the firm to shareholders. If they do, share price movements can be explained in one of two ways: as changes in expectations about future profits, changes that support future dividend payments; or as fluctuations in investors’ required rates of return. Hence, from the early 1990s to 2000, the rise in share prices of intangible-intensive companies may have been due to advance news of unprecedented profit growth. Alternatively, prices may have increased because investors decided that the stock market was much less risky than they had previously believed. As a result, they reduced their required rates of return. For example, Siegel (1998) argues that stocks, not bonds, have been the safest long-term investment vehicle. Accordingly, investors may have realized that they were irrationally fearful of stocks. When stocks are seen as posing little risk, rational investors will bid up stock prices. In other words, they will decide that the equity premium was too high in the past but that it is just right now.7 Another view of the stock market cautions that share prices may sometimes have a life of their own, apart from the intrinsic level represented by the discounted value of future distributions. Observers have long recognized the theoretical possibility that share prices deviate from their intrinsic values because of a rational bubble.8 Outside this particular paradigm, numerous models show that noise traders, fads, or other psychological factors influence share prices. Although I cannot explain the disconnect between asset prices and their intrinsic values, I can cite two well-known examples of this phenomenon: tulip prices in 1634–37 and Japanese share prices in 1989. These anomalies in price behavior are difficult to dismiss on empirical grounds. The recent stock market boom and at least partial bust may be another such anomaly. Indeed, Shiller (2000) argues that investors have not learned that the stock market is less risky than they had previously thought. Rather, for a whole host of reasons, investors have been and continue to be “irrationally exuberant.” Highlighting the key distinction between these two views of the stock market is important. Whereas the first view treats market efficiency as a 7. McGrattan and Prescott (2000) use this argument to conclude that “it is troubling that economic theory failed so miserably to account for historical asset values and returns while, at the same time, it does so well in accounting for current observations.” The “current observations” in their study date from the beginning of 2000, so apparently economic theory needs some help in explaining the subsequent downturn (see also Kiley 2000). 8. A rational bubble occurs when the expected discounted future price does not converge to zero in the limit. Both theoretical and empirical arguments can be used to rule out rational bubbles (see, for example, Campbell, Lo, and MacKinlay 1997, chap. 7). Hence, rational bubbles are unlikely to offer a persuasive explanation for behavior of financial markets.
maintained hypothesis, the second treats market inefficiency as a maintained hypothesis. To illustrate the implications of this distinction, I pick a stream of expected profits. The first theory tells us what the (possibly timevarying) discount rate (that is, the return) must be to justify the observed stock price. The second theory tells us that the stock price differs from its intrinsic value for some reason outside the basic model—bubbles, noise traders, fads, or the like. It is very difficult to determine which of these explanations is preferable because they both rely on unobservable factors to explain the same data. If one is to have any confidence in either explanation, one must exploit the testable implications of the dynamic stochastic structure of the unobservable factors. Toward this end, I created a model based on joint research with Stephen Bond (2000a, 2000b, 2002). Suppose the stock market reveals the intrinsic value of the firm with some error, so that (7)
$$V^{E}_t = V_t + \mu_t,$$
where $\mu_t$ is the measurement error in the equity valuation $V^E_t$, regarded as a measure of the intrinsic value $V_t$. Substituting $V^E_t$ for $V_t$ in equation (5) then gives the empirical valuation equation with noisy share prices:
(8) $V^E_t = \lambda_K(K_t - I_t) + \lambda_{KIT}(KIT_t - IT_t) + \lambda_{KIP}(KIP_t - IP_t) + (\mu_t + \varepsilon_t).$
Let us consider the effect of measurement error on the model’s dependent variable and ignore the difficulty presented by the unobservable productivity shock, which is considered in the following section. The conventional wisdom is that measurement error of this type biases the standard errors but not the coefficient estimates (see, for example, Hausman 2001). However, this wisdom is false when the measurement error is correlated with the explanatory variables. To illustrate the argument, I consider a simplified version of equation (8) in which the firm has only IT capital. The coefficient estimate on IT capital—call it $b_{KIT}$—will consist of the true return on IT, $\lambda_{KIT}$, and the bias caused by measurement error: $\text{plim}\, b_{KIT} = \lambda_{KIT} + \gamma_{\mu,KIT}$, where $\gamma_{\mu,KIT}$ is the coefficient estimate from a hypothetical regression of the measurement error on IT capital: $\gamma_{\mu,KIT} = \mathrm{COV}(\mu, KIT)/\mathrm{VAR}(KIT)$. Clearly, no bias occurs if $\mathrm{COV}(\mu, KIT) = 0$; the measurement error is uncorrelated with the regressor, and the conventional wisdom about measurement error in the dependent variable is correct. However, if the stock market overestimates the value of IT-intensive companies, then $\gamma_{\mu,KIT} > 0$, and therefore the return on IT investment will be upwardly biased. Because my sample is skewed toward those companies commonly thought to have been overvalued compared with fundamentals—companies in the 1990s with big IT budgets—this bias could be substantial. If the stock market un-
derestimates the value of IT-intensive companies, the bias will go in the other direction. Indeed, this type of downward bias implies that the true return on investment exceeded the estimated return during periods like the 1970s, when the stock market was arguably undervalued compared with fundamentals. In addition, one cannot sign the bias based on a priori reasoning in the multivariate case, but the estimated returns on IP and tangible capital are also likely to be biased. However, the IT and IP coefficients seem likely to be severely affected because the stock market appears to have overestimated the value of intangible-intensive companies in the 1990s. Rather than using the stock market to infer the value of intangibles, I rely on analysts’ profit forecasts. Intangible assets create value only to the extent that they are expected to generate profits in the future. Professional analysts are paid to forecast the future profits of the firms they track, and leading analysts are paid very well indeed for performing this role. Thus, one can ask whether analysts are forecasting profit growth in line with the intangible asset growth that seems to be implied by stock market valuations. Though the popular press regularly lambastes analysts for being far too optimistic, the answer is no.9 After introducing the data in the next section, I show that analysts’ forecasts of future profits are informative. Combining these forecasts with a simple assumption about the discount rates $\beta_t^{\,t+s}$, I construct an alternative estimate of the present value of current and future net revenues as (9)
$$\hat{V}_t = E_t\!\left[\Pi_t + \beta_t^{\,t+1}\Pi_{t+1} + \cdots + \beta_t^{\,t+s}\Pi_{t+s} + \cdots\right].$$
I then use this estimate in place of the firm’s stock market valuation. Clearly the estimate $\hat{V}_t$ will also measure the firm’s intrinsic value $V_t$ with some error $\nu_t$. The potential ways of introducing measurement error include truncating the series after a finite number of future periods, using an incorrect discount rate, and using analysts’ forecasts, which project net profits rather than net revenues. The resulting empirical valuation equation is
Vˆt k(Kt It ) KIT (KITt ITt ) KIP(KIPt IPt ) ( t εt ).
As discussed in the following section, identification will depend on whether the measurement error is uncorrelated with suitably lagged values of instruments such as capital stocks. This event seems plausible because the current measurement error obtained with analysts’ forecasts is unlikely to 9. Armed with a time-varying, firm-specific discount rate, one can equate any stream of profit forecasts to observed stock prices at every observation; without additional restrictions an infinite number of paths of time-varying discount rates can equate the two. The key point is that extreme assumptions would be required to obtain the VE’s in the sample from the analysts’ forecasts of future profits. Share prices in my sample appear to be high in relation not only to current profits but also to the best available forecasts of likely future profits.
be correlated with lags of the capital stock. Ultimately, however, this question is an empirical one that can be investigated with tests of overidentifying restrictions. 2.3.2 Unobservable Productivity Shock Despite some important differences, empirical valuation equations (8) and (10) resemble production functions. This similarity is unfortunate because, as Griliches and Mairesse (1999) say, “In empirical practice, the application of panel methods to micro-data have produced rather unsatisfactory results.” Mairesse and Hall (1996) show that attempts to control for unobserved heterogeneity and simultaneity—both likely sources of bias in the OLS results—have produced implausible estimates of production function parameters. To be more specific, in my model I assume that the unobservable productivity shock consists of a firm-specific, a time-specific, and an idiosyncratic component. In this case, applying GMM estimators, which take first differences to eliminate unobservable firm-specific effects and use lagged instruments to correct for simultaneity in the first-differenced equations, has produced especially unsatisfactory results. Blundell and Bond (1998, 2000) show that these problems are related to the weak correlation between the regressors and the lagged levels of the instruments. This insignificant correlation results in weak instruments in the context of the first-differenced GMM estimator. Bond and Blundell show that these biases can be dramatically reduced by incorporating more informative moment conditions that are valid under quite reasonable conditions. Essentially, their approach is to use lagged first differences as instruments for equations in levels, in addition to the usual lagged levels as instruments for equations in first differences. The result is the so-called system-GMM estimator, which I use as the preferred estimator. I then use DPD98 for GAUSS to perform the estimation (Arellano and Bond 1998).10 I conduct two types of diagnostic tests for the empirical models. First, I report the p-value of the test proposed by Arellano and Bond (1991) to detect first- and second-order serial correlation in the residuals. The statistics, which have a standard normal distribution under the null, test for nonzero elements on the second off-diagonal of the estimated serial covariance matrix. Second, I report the p-value of the Sargan statistic (also known as Hansen’s J-statistic), which is a test of the model’s overidentifying restrictions; formally, it is a test of the joint null hypothesis that the model is correctly specified and that the instruments are valid. 2.3.3 Limits of the Empirical Approach If the GMM-based empirical approach is successfully implemented then that is the end of the story in most applications. However, intangible assets 10. In all specifications, I capture time effects by including year dummies in the estimated specifications.
pose a special problem. According to my model, intangibles are associated with specific investments, but clearly that is not the whole story; sometimes intangibles are not associated with any identifiable outlay. In that case, at least some of the intangibles end up in the error term as an omitted variable or as part of the unobservable productivity shock. To fix ideas, let us suppose the fixed effect in the unobservable productivity shock represents intangible capital. If the fixed effect embeds intangible capital in this way, the econometric solution may be worse than the problem. In particular, taking first differences will sweep out the effect of fixed intangible capital. As a result, the possibility that intangible capital determines the level of the firm’s intrinsic value will be completely missed. Let us take another interesting example: MFP is normally thought of as a black box, but perhaps this box is full of what researchers mean by intangibles. Indeed, many of the examples used to illustrate the role that intangibles play in organizations have the flavor of MFP. That is, intangible capital comes from a good idea, like—in Dell’s case—selling computers over the Internet or from a unique corporate culture created by a chief executive officer like Jack Welch or Bill Gates. Nevertheless, most intangible assets appear to be created by investment, as I argued in the introduction. After all, Dell cannot sell computers over the Internet without its own computers, and Microsoft spends more than $5 billion annually on R&D and advertising. In summary, if one were to pursue an estimation strategy like GMM with instruments that were arguably orthogonal to the error term, one might recover something closer to the direct impact of any asset on market value. However, one would by construction miss the role of omitted intangibles or intangibles that underlie the productivity shock. Thus, such instrumental variable strategies could be informative, but they could not provide the full set of answers about the role of intangibles. In fact, Brynjolfsson, Hitt, and Yang (2000, 2002) have taken this argument one step further: they say that the effect of intangible capital can be indirectly inferred from OLS estimates of the return on IT capital. Two points are worth making about this argument: the first is methodological and the second empirical. First, OLS cannot be used to separate out all the direct and indirect effects of intangible capital. In particular, the return on, or the stock of, intangible capital cannot be inferred from the biased OLS coefficient on IT capital. When intangible capital (IC) is an omitted variable and IT capital is the only other type of capital, a straightforward analysis of omitted-variable bias reveals that the coefficient on IT capital is $\text{plim}\, b_{KIT} = \lambda_{KIT} + \lambda_{KIC}\,\gamma_{KIC,KIT}$, where $\lambda_{KIC}$ is the return on intangible capital and $\gamma_{KIC,KIT}$ is the coefficient estimate from a hypothetical regression of the omitted intangible capital KIC on IT capital: $\gamma_{KIC,KIT} = \mathrm{COV}(KIC, KIT)/\mathrm{VAR}(KIT)$. For example, if one
dollar of IT capital is associated with more than one dollar of omitted intangible capital, then $\gamma_{KIC,KIT} > 1$. Using firm-level data, Brynjolfsson and others (2000, 2002) estimate $b_{KIT}$ with OLS and find that each dollar of IT capital is associated with about ten dollars of market value. They interpret this finding as revealing the existence of a “large stock of intangible assets that are complementary with IT spending [emphasis added].” However, that conclusion depends on assumptions about little understood relationships. Specifically, to say anything about the value of intangible capital, one must know the return on IT capital. And to say anything about the return on intangibles or the size of the stock of intangibles, one must break the value of intangible capital into its constituent components. Brynjolfsson and others solve these problems by assuming that adjustment costs are zero, in which case the returns to IT and intangible capital are equal to unity ($\lambda_{KIT} = \lambda_{KIC} = 1$) and the stock of intangible capital associated with IT capital can be backed out. According to this argument, the stock market does not literally value one dollar of IT capital at ten dollars. Rather, the estimate is a “marker” for the existence of a large stock of IT-related intangibles. The second concern is empirical: the results in Brynjolfsson, Hitt, and Yang (2000) contradict the authors’ interpretation of the estimate on IT capital. When the authors add a variable that measures organizational intangibles, ORG, to the regressions, $b_{KIT}$ is almost totally unaffected.11 If the additional variable better measures intangibles, as the authors argue persuasively, then $b_{KIT}$ should fall significantly because it is a marker for intangibles. Because the estimate is about unchanged, $b_{KIT}$ must be biased for another reason, like the stock market mismeasurement or simultaneity bias that I have highlighted. If it is biased for another reason, then one is wise to adopt an empirical technique that corrects for the bias.
2.4 Data
2.4.1 Sources and Definitions
The limiting factor in our empirical analysis is the availability of data on IT outlays. For IT expenditures I use a data set compiled by Lev and Radhakrishnan (chap. 3 in this volume) from Information Week, which is in turn based on surveys by the Gartner Group. The total sample is an unbalanced panel of firms that appeared in the Information Week 500 list between 1991 and 1997 and for which Compustat and I/B/E/S data are avail-
11. In their subsequent paper, Brynjolfsson, Hitt, and Yang (2002) do not include the telling regression from their first paper. Instead, they interact ORG with employment. Although the interpretation of the effect of ORG is complicated in this interaction, the take-away point remains the same: The estimate on IT capital does not change significantly when ORG interacts with employment in the regression.
To be sure, this is a nonrandom sample, because the Information Week 500 contains firms with large IT budgets. However, it is worth pointing out that the biggest purchasers of IT are generally not the biggest sellers of IT. For example, General Electric and General Motors usually topped the 500 list. Given the nature of the sample selection, the empirical results should be used to draw conclusions about only the types of large firms in the sample. The variables used in the empirical analysis are defined as follows:
• V E is the sum of the market value of common equity (defined as the number of common shares outstanding multiplied by the end-of-fiscal-year common stock price) and the market value of preferred stock (defined as the firm's preferred dividend payout divided by Standard & Poor's preferred dividend yield obtained from Citibase).
• Vˆ is the present value of analysts' profit forecasts. Let πit and πi,t+1 denote firm i's expected profits in periods t and t + 1, formed using beginning-of-period information. Let git denote firm i's expected growth rate of profits in the following periods, formed using beginning-of-period information. Notice that the stock market valuation of the firm, V E, is dated at time t – 1 so that the market information set contains these forecasts. Then, to calculate the implied level of profits for periods after t + 1, I allow the average of πit and πi,t+1 to grow at the rate git. Let this average be π̄it.12 The resulting discounted sequence of profits defines Vˆit in the following way:

Vˆit = βt πit + βt² πi,t+1 + βt³ (1 + git)π̄it + βt⁴ (1 + git)²π̄it + βt⁵ (1 + git)³π̄it + βt⁵ (1 + git)³π̄it /(r – g).

The constant discount factor βt reflects a static expectation of the nominal interest rate over this five-year period; that is, I use the Treasury bill interest rate in year t (plus a fixed 8 percent risk premium as suggested by Brealey and Myers [2000] among others).
• Dt is the book value of debt, which is the sum of short- and long-term obligations.

12. In principle, the period for calculating Vˆ should be infinity. However, analysts estimate g over a period of three to five years. Thus to match the period for which information exists, I set the forecast horizon to five years. A terminal value correction accounts for the firm's value beyond year five. The correction assumes that the growth rate for profits beyond this five-year horizon is equal to that for the economy. Specifically, I create a growth perpetuity by dividing the last year of expected earnings by (r – g), where I assume that r is the mean nominal interest rate for the sample period as a whole (about 15 percent, which includes a constant 8% risk premium) and g is the mean nominal growth rate of the economy for the sample period as a whole (about 6%).
• Ct is net current assets, essentially cash on hand.
• I and K are capital expenditures and the current-cost net stock of property, plant, and equipment (both excluding IT). In constructing the current-cost stock, I follow the perpetual inventory method (a simple recursion, sketched below) and use an industry-level rate of economic depreciation derived from Hulten and Wykoff (1981).
• IT and KIT are IT expenditures and the current-cost net stock of IT. IT outlays are from the Information Week survey. Again, in constructing the current-cost stock, I follow the perpetual inventory method, and I use a depreciation rate consistent with annual economic depreciation of 40 percent.13
• IP and KIP are IP expenditures and the current-cost net stock of IP. IP expenditures are the sum of R&D and advertising. In constructing the current-cost stock, I once more follow the perpetual inventory method, and I use a depreciation rate consistent with annual economic depreciation of 25 percent.

The estimation sample includes all firms with at least four consecutive years of complete data. Four years of data are required to calculate first differences and to use lagged variables as instruments. I determine whether the firm satisfies the four-year requirement after I delete several observations that appear to be recording or reporting errors. Also, a few observations were deleted because Vˆ was negative.14

We turn now to a description of the sample (table 2.1). The first two rows of the table define the different proxies for the intrinsic value of the firm. The total value of the firm consists of three components: the return to equity holders, V E or Vˆ; the return to debt holders, D; and an adjustment for net current assets, C. At both the mean and the median values, the stock-market-based value is about three-quarters greater than the analyst-based value. Another notable feature of the sample is that spending on IT and IP is a large fraction of total investment spending at the mean and median values.

13. Economic depreciation includes the effect of technological obsolescence as well as wear and tear, so a 40% depreciation rate is reasonable given the rapid advances in the quality of IT. Estimates of IT depreciation rates are scarce, but in recent studies researchers have found that the economic depreciation rate of personal computers is about 40 percent (Doms et al. 2004). More generally, the depreciation rates adopted in the NIPAs for software are considerably higher than 40 percent, and those for communications gear are considerably lower. My choice seems balanced, and the empirical results are not materially affected by alternatives in the range of 30 percent to 50 percent.
14. The data and programs for this study are available at www.insitesgroup.com/jason.
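For concreteness, the two constructions described above (the discounted analyst forecasts behind Vˆ and the current-cost stocks built by the perpetual inventory method) can be sketched in a few lines of Python. The sketch is purely illustrative: the function names and the made-up profit and investment figures are assumptions introduced here for exposition, and the terminal value simply follows note 12. It is not the chapter's actual code, which is available at the address in note 14.

def analyst_value(profit_t, profit_t1, growth, r, g_econ=0.06):
    # Discounted five-year profit path plus a growth-perpetuity terminal value
    # (note 12); beta is the constant discount factor 1/(1 + r).
    beta = 1.0 / (1.0 + r)
    avg = 0.5 * (profit_t + profit_t1)
    profits = [profit_t, profit_t1,
               (1 + growth) * avg, (1 + growth) ** 2 * avg, (1 + growth) ** 3 * avg]
    v_hat = sum(beta ** (s + 1) * p for s, p in enumerate(profits))
    return v_hat + beta ** 5 * profits[-1] / (r - g_econ)

def current_cost_stock(outlays, depreciation, k0=0.0):
    # Perpetual inventory recursion K_t = (1 - delta) * K_{t-1} + I_t
    # (price revaluation to current cost is omitted in this sketch).
    stocks, k = [], k0
    for i in outlays:
        k = (1 - depreciation) * k + i
        stocks.append(k)
    return stocks

# Hypothetical figures, in millions of dollars:
print(analyst_value(profit_t=100.0, profit_t1=110.0, growth=0.11, r=0.05 + 0.08))
print(current_cost_stock([223, 240, 260], depreciation=0.40))

The 40 percent rate in the example mirrors the IT depreciation assumption above, and the discount rate adds the 8 percent risk premium to an assumed Treasury bill rate.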
Table 2.1    Descriptive statistics for variables used in empirical analysis, 1991–97 (in millions of current dollars)

Variable            Mean    Standard deviation   First quartile   Median   Third quartile
(V E + D – C)a     12,315        23,225               2,321        5,086       12,402
(Vˆ + D – C)b       7,208        15,308               1,179        2,942        7,379
K                   5,822        10,107                 734        2,051        6,453
KIT                   922         2,013                 135          337          802
KIP                 1,726         4,289                   0          292        1,304
I                     769         1,696                 107          298          729
IT                    223           461                35.0         81.1          200
IP                    383           997                   0         53.0          255
Note: In this and subsequent tables, as well as in the chart, the sample contains firms with at least four years of complete data. N = 253, for a total of 1,503 observations.
a Stock-market-based value.
b Analyst-based value.
2.4.2 A Look at Analysts' Forecasts

To lay the foundation for using the analyst-based proxy for the intrinsic value of the firm, I compare the analysts' forecasts of long-term growth, git, with realizations of growth over a three-year period. My results show that analysts expected profits to grow at an annual rate of 11.3 percent for the mean firm in my sample. Over a three-year period, profits actually grew just a touch more slowly than estimated, at a rate of 11 percent. A visual comparison of actual and expected profit growth is revealing (figure 2.1). Three features of the data are apparent. First, analysts do not forecast negative long-term growth. That practice is sensible because such forecasts would be equivalent to saying that the company was essentially worthless. Second, analysts are loath to forecast exceedingly high long-term growth rates—another sensible practice. Few companies generate profit growth in excess of 30 percent, and analysts cannot easily identify ex ante those that may realize such growth. Finally, actual profit growth is highly variable. Some companies grow at fast rates or suffer large retrenchments. The OLS regression line describes the average relationship between the two variables. Actual and expected earnings growth are positively related—the slope of the regression line is 0.74 with a standard error of 0.15—but realized earnings growth often differs widely from analysts' expectations.15 Moreover, the forecasts tend to be overly optimistic on average. In addition, analysts do not issue particularly accurate long-range forecasts; evidently, a lot can happen to a company over a three-year period, and most of what happens cannot be anticipated. However, the key requirement for my purposes is not forecast accuracy but the ability of analysts' forecasts to capture the expected future returns on which the firm's investment decisions are based. Judged according to this metric, analysts' forecasts appear
15. I have left a few extreme observations out of the figure in order to maintain a 1:1 aspect ratio. However, in fitting the regression, I have included these observations.
Fig. 2.1
Actual profit growth and expected long-term profit growth, 1992–97
Note: Each dot represents a single firm-year observation. In the regression equation, AGR is actual profit growth and EGR is expected profit growth; standard errors are in parentheses.
to be reasonable and informative assessments about companies' future prospects.

2.5 Empirical Results

Empirical results appear in two stages. I present OLS estimates of the empirical valuation equations in levels and within-groups (table 2.2). After establishing that these results are consistent with the sort of bias I have described, I present the results from two GMM estimators (table 2.3). First, I present a standard estimator that takes first differences in the empirical equations and uses lagged capital stocks as instrumental variables. For reasons described in section 2.3.2, the coefficient estimates are likely to be downwardly biased in this case. Second, I present results from the system-GMM estimator. The diagnostic statistics indicate that system-GMM is well behaved when the analyst-based measure of intrinsic value is used and that the results themselves are quite sensible.
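To fix the mechanics of the first estimator before turning to the tables, the listing below simulates a small panel with a persistent capital stock and a firm effect, first-differences the valuation relation, and instruments the differenced regressor with a level dated two periods earlier. It is a schematic only, with synthetic data, a single capital stock, and a hand-rolled two-stage least squares routine; it is not the chapter's estimator or its data.

import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 253, 7
alpha = rng.normal(size=n_firms)                       # unobserved firm effect
k = np.zeros((n_firms, n_years))
for t in range(1, n_years):                            # highly persistent capital stock
    k[:, t] = 0.9 * k[:, t - 1] + 0.1 * alpha + 0.2 * rng.normal(size=n_firms)
v = 1.0 * k + alpha[:, None] + 0.5 * rng.normal(size=k.shape)   # valuation relation

dv = np.diff(v, axis=1)[:, 2:].ravel()                 # differenced value, last four years
dk = np.diff(k, axis=1)[:, 2:].ravel()                 # differenced capital (endogenous)
z = k[:, 1:-2].ravel()                                 # instrument: level dated t - 2

def two_sls(y, x, z):
    Z = np.column_stack([np.ones_like(z), z])
    X = np.column_stack([np.ones_like(x), x])
    x_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(x_hat, y, rcond=None)[0]    # second stage

print("first-difference IV coefficient on capital:", round(two_sls(dv, dk, z)[1], 2))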
Table 2.2    OLS estimates of the valuation equations, 1992–97

                                                      Dependent variable
                                       Level                                                    Within-group
                (V Et + Dt)    (Vˆt + Dt)    (V Et + Dt – Ct)   (Vˆt + Dt – Ct)   (V Et + Dt – Ct)   (Vˆt + Dt – Ct)
                    (1)            (2)              (3)                (4)                (5)                (6)
(Kt – It)      0.753 (0.075)  0.482 (0.064)   0.821 (0.064)     0.550 (0.048)     0.892 (0.216)     0.182 (0.169)
(KITt – ITt)   3.19 (0.491)   3.14 (0.416)    1.97 (0.415)      1.91 (0.316)     –6.67 (0.836)     –8.63 (0.656)
(KIPt – IPt)   2.07 (0.211)   1.54 (0.179)    1.84 (0.179)      1.31 (0.136)      2.67 (0.685)     0.383 (0.537)

Diagnostic tests (p-values)
Serial correlationa
  First-order      0.070          0.066            0.143              0.169              0.930             0.886
  Second-order     0.086          0.086            0.171              0.214              0.245             0.317
R̄2                 0.451          0.401            0.474              0.457              0.107             0.171

Notes: Year dummies are included (but not reported) in all specifications. Robust standard errors on coefficients are in parentheses. For estimation, N = 253 but we drop the first year, leaving a total of 1,250 observations.
a The test for serial correlation in the residuals is asymptotically distributed as N(0,1) under the null of no serial correlation.
Table 2.3    GMM estimates of the valuation equations, 1992–97

                                          Dependent variable
                            First-differences                             System
                (V Et + Dt – Ct)   (Vˆt + Dt – Ct)    (V Et + Dt – Ct)   (Vˆt + Dt – Ct)
                      (1)                (2)                 (3)                (4)
(Kt – It)        0.399 (0.478)      0.007 (0.197)       1.75 (0.144)      0.846 (0.135)
(KITt – ITt)     –12.9 (1.33)       –11.3 (1.30)       0.725 (0.390)      1.72 (0.327)
(KIPt – IPt)      9.72 (1.80)        3.93 (1.01)       0.652 (0.273)     0.684 (0.257)

Diagnostic tests (p-values)
Serial correlation
  First-order         0.656              0.634              0.883              0.644
  Second-order        0.345              0.488              0.326              0.463
Sargan testa          0.047              0.360              0.000              0.073
Notes: In the first-differences estimator, the instrumental variables are the levels of the capital stocks in periods t – 3 and t – 4. In the system estimator, the valuation equation in first-differences is estimated jointly with the valuation equation in levels. The instrumental variables for the first-differenced equation are the levels of the capital stocks in period t – 3 and t – 4. The instrumental variables for the levels equation are the first-differences of the capital stocks in period t – 2. See also notes to table 2.2.
a The test of the overidentifying restrictions, called a Sargan test, is asymptotically distributed as χ2(n – p), where n is the number of instruments and p is the number of parameters.
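The Sargan statistic described in note a can be computed directly from a two-stage least squares fit. The short function below assumes homoskedastic errors and an instrument matrix Z that stacks every exogenous column (constant, year dummies, and the excluded instruments); the function name and the scipy dependency are choices made here for illustration and are not part of the chapter.

import numpy as np
from scipy import stats

def sargan(y, X, Z, beta):
    # Sargan statistic: n times the uncentered R-squared from regressing the
    # 2SLS residuals on the full instrument set; chi-squared with
    # (number of instruments - number of parameters) degrees of freedom.
    u = y - X @ beta
    pu = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    stat = len(y) * (pu @ pu) / (u @ u)
    dof = Z.shape[1] - X.shape[1]
    return stat, 1.0 - stats.chi2.cdf(stat, dof)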
2.5.1 OLS Results In the specification in the first column of table 2.2, the coefficient on IT capital substantially and significantly exceeds unity, as does the coefficient on IP capital. Meanwhile, the estimate of the return on tangible capital is significantly less than unity.16 According to this first pass at the data, one dollar of IT capital is associated with about two dollars of unmeasured intangibles and one dollar of IP capital is associated with about one dollar of unmeasured intangibles. Thus, my basic results parallel those reported by Brynjolfsson and others even though (a) I do not use the same firms or estimation period, (b) I use different techniques for constructing the capital stocks, and (c) I use different regressors.17 The pattern of estimates in column (1) is similar to that in column (2), 16. Recall from the theoretical model that the beginning-of-period capital stocks belong on the right-hand side of the empirical valuation equation. According to equation (2), the beginning-of-period capital stocks are equal to the difference between the current capital stock and current investment. Hence, the relevant regressors are (Kt – It) and so on. 17. I could not investigate the effects of these differences because Brynjolfsson and his collaborators declined to share their data.
where Vˆ replaces V E. In particular, whether one uses an analyst-based or a market-based definition of intrinsic value does not make much difference when one estimates in levels with OLS. However, the estimates on IT capital are considerably smaller in columns (3) and (4), where net current assets are accounted for in valuing the firm. Apparently, large IT capital stocks are associated with relatively abundant net current assets. Microsoft, for example, has a large stock of IT and has amassed a huge cash cushion on its balance sheet. When one ignores this relationship, the coefficient on IT capital picks up both the effect of intangibles and the omitted effect of net current assets. Thus, to develop an accurate picture of the role of IT capital, one must define the value of the firm carefully. So far the results have not controlled for unobserved heterogeneity. As a result, the estimates are difficult to interpret because the firm-specific effect is surely correlated with contemporaneous capital investments. To sweep out the firm-specific effect, I include within-group estimates presented in columns (5) and (6), which express all of the variables as deviations from within-firm means. In this case, the coefficients on IT are significantly negative in both specifications, and the coefficients on the other types of capital appear downwardly biased in the final column. These findings are not surprising because the capital stocks are highly persistent. Although unitroot tests are useless for short panels, the (unreported) first-order autoregression (AR[1]) coefficient estimates from regressions of the current capital stocks on their first lags are all greater than 0.92. In such situations, the received wisdom from the literature on production function estimation indicates that one should expect downward bias from within-group estimates.18 2.5.2 GMM Results The GMM estimates are useful because the within-group results do nothing to control for simultaneity bias. Such bias must be important because the value of the firm (no matter how it is measured) and its investment policy are jointly determined. To see the intuition behind this point, compare the empirical valuation equation with an empirical investment equation based on Tobin’s Q. In the current setup, the firm’s intrinsic value is a function of the capital stock and investment, whereas the reverse is true in an equation that relates the investment rate to Tobin’s Q. Put simply, increases in market value may cause investment in IT (and other types of capital), but the reverse may be true, too. To deal with simultaneity bias (and eliminate the firm-specific effect at the same time), I estimate the first-
18. In fact, it is not unusual for production function estimates of the capital share to go from 0.3 in levels to negative values in within-groups. By comparison, the magnitude of the bias in table 2.2 may seem surprisingly large, but one should keep in mind that production functions are estimated in logs.
differenced empirical valuation equations with GMM, using lagged levels of the capital stocks as instruments (table 2.3). Looking first at the Sargan test, we see that the p-values in columns (1) and (2) of the table do not indicate a decisive rejection of the model’s overidentifying restrictions. This result does not mean, however, that the instruments are informative. Indeed, in unreported results, I confirm that one cannot reject weak instruments when using the partial R2 or first-stage F-statistic as criteria. If the instruments used in the first-differenced equations are weak, then the results should be biased in the direction of withingroups.19 Indeed, a comparison of columns (1) and (2) of table 2.3 with columns (5) and (6) of table 2.2 shows that the direction and magnitude of the bias are similar in the first-differenced and within-group estimates. To address concerns about weak instruments, I use the system-GMM estimator in columns (3) and (4) of table 2.3. The Sargan test indicates that the model using V E is decisively rejected while the one using Vˆ is not. This result suggests that the instruments are correlated with the market’s, but not with the analysts’, mismeasurement of companies’ intrinsic values. Why might this correlation occur? As I have argued, intangibles are difficult to value. If, say, the lagged change in the stock of intangibles is correlated with the extent to which the market overstates the firm’s intrinsic value, then the system-GMM estimator will tend to be rejected. In contrast, for reasons I have discussed, we have little reason to worry that analysts’ forecast errors are correlated with the lagged change in the stock of intangibles, and the Sargan test supports this conjecture. Therefore, my preferred estimates use the analyst-based measure of the firm’s intrinsic value. In column (4), the coefficient estimates on tangible and IP capital are insignificantly different from unity (although they are significantly different from zero), and the coefficient on IT capital is significantly greater than unity. Taken at face value, the coefficient on IT capital implies that organizational capital earns a 72 percent annual rate of return, a figure that certainly appears sizable. However, three points are worth noting. First, the evidence of outsized returns is statistically weak because the 95 percent confidence interval encompasses returns as low as 7 percent. Second, in my model the return on IT capital includes the effect of adjustment costs; indeed, that is how organizational capital is defined in equation (6). This possibility is seldom noted because researchers usually estimate the return on 19. The technical explanation for this statement depends on two things. First, weak instruments will bias two-stage least squares (2SLS) in the direction of OLS. Second, the firstdifferenced GMM estimator coincides with a 2SLS estimator when the fixed effects are removed with the orthogonal deviations transformation; and OLS transformed to orthogonal deviations coincides with within-groups. Therefore, weak instruments will bias this particular 2SLS estimator (which coincides with first-differenced GMM) in the direction of withingroups.
IT with a static production function, which assumes that capital is in a steady-state equilibrium so that adjustment costs are zero by construction.20 Finally, closely related to the previous point, investors and analysts cannot observe organizational capital so they must infer its value. Such inferences are difficult and subject to considerable uncertainty. Therefore, caution is warranted in interpreting the coefficient estimate on IT capital as a rate of return, which is usually defined in terms of the actual profits generated by observable capital. The coefficient on IP capital is less than unity, a result consistent with earlier findings that R&D earns a somewhat less than normal rate of return (see, for example, B. Hall 1993b). Perhaps firms cannot reap the full benefit of their IP investments because of the nonexclusive nature of some types of R&D (see, for example, Griliches 1979; Jaffe 1986; Bernstein and Nadiri 1989). However, one must exercise caution in drawing such a conclusion because the 95 percent confidence interval encompasses returns as large as 20 percent, a result more in line with the recent findings in Hand (2002). Finally, the estimate on tangible capital (excluding IT) is slightly less than unity. This outcome is consistent with lower rates of return on these types of capital and with recent studies in which estimated adjustment costs are quite modest in size (see, for example, Bond and Cummins 2000a).

2.6 Conclusion

The dramatic rise of the stock market in the 1990s led some observers to conclude that intangible capital was an increasingly important contributor to the bottom line at many companies. However, the abrupt and sustained decline in the stock market that began in 2000 seemed to suggest just the opposite. This reversal highlighted the desirability of alternative measurement strategies that would distinguish between the gyrations of the stock market and the value created by intangibles. My empirical approach offers such an alternative strategy and provides a different perspective about what intangibles are and how researchers can estimate their return. In my model, intangible capital is not a distinct factor of production as is physical capital or labor; indeed, I assume that intangibles, unlike a computer or a college graduate, cannot be purchased in a market. Nor are intangibles some kind of relabeled MFP. Rather, intangible capital is the "glue" that creates value from the usual factor inputs.
20. To see the implications of my approach in the context of a production function, notice that the marginal product of capital in my model is equal to the traditional user cost plus adjustment costs. For example, abstracting from taxes and setting the price of capital equal to unity, the equilibrium condition in my model is ∂π/∂KIT = r + δKIT + ∂C/∂KIT. As long as adjustment costs are positive, the estimated return on capital can exceed (r + δKIT), the usual required rate of return under the equilibrium condition in a production function framework.
This perspective naturally suggests an empirical model in which intangible capital is defined in terms of adjustment costs. As such, intangibles are the difference between the value of installed inputs and that of uninstalled inputs. In my empirical approach, I use two proxies for the intrinsic value of the firm, one based on the firm’s stock market value and the other based on analysts’ profit forecasts. In addition, I use a GMM estimation technique to control for unobserved heterogeneity and simultaneity bias in specifications with nearly integrated regressors. Using the analyst-based proxy and the GMM technique, I find no evidence of economically important intangibles associated with investment in intellectual property or physical capital apart from IT. However, my estimates suggest that organizational capital created by information technology generates a sizable return. These findings come with a caveat. Controlling for simultaneity bias and unobserved heterogeneity removes intangibles that may have been swept into the error term, either as omitted variables or as part of the unobservable productivity shock. Nevertheless, alternative empirical approaches are unpalatable, to say the least. Indeed, my OLS estimates seem to imply a strong role for intangibles, but they are unreliable because the value of the firm and its investment policy are jointly determined. In the end, how best to characterize the heterogeneity across firms and what role intangibles play remain open questions. Are intangibles part of the unobservable productivity shock? Are intangibles some fixed (or quasi-fixed) factor that interacts in complex ways with other inputs? The answers to these questions remain unresolved. Finally, I consider whether my approach suggests ways to incorporate intangible capital into national income accounting. At a basic level, the implications are not encouraging. Factor inputs in the national accounts have prices, but such prices are often difficult to measure accurately. In contrast, my approach starts with the assumption that intangibles are nearly impossible to value as stand-alones. In particular, intangibles have unobservable shadow prices that depend on expectations. This setup makes the return on intangibles impossible to measure directly and uncertain by construction. These two features render intangible capital particularly ill suited to national income accounting. Nevertheless, my approach does suggest a road map for improving the national accounts. A key ingredient for better understanding the scope of intangibles is detailed data on the types of outlays that are closely connected with intangibles. In this regard, the national accounts could be considerably improved. I focused on IT, R&D, and advertising, but it would be desirable to have data on other types of outlays, such as education, on-the-job training programs, and the like.
References Arellano, Manuel, and Stephen R. Bond. 1991. Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58 (2): 277–97. ———. 1998. Dynamic panel data estimation using DPD98: A guide for users. Manuscript, Oxford University. Baily, Martin Neil. 1981. Productivity and the services of capital and labor. Brookings Papers on Economic Activity, Issue no. 1:1–50. Bernstein, Jeffrey I., and M. Ishaq Nadiri. 1989. Research and development and intra-industry spillovers: An empirical application of dynamic duality. Review of Economic Studies 56 (2): 249–69. Blundell, Richard, and Stephen R. Bond. 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87 (1): 115–43. ———. 2000. GMM estimation with persistent panel data: An application to production functions. Econometric Reviews 19 (3): 321–40. Bond, Stephen R., and Jason G. Cummins. 2000a. Noisy share prices and the Qmodel of investment. Manuscript, New York University. ———. 2000b. The stock market and investment in the new economy: Some tangible facts and intangible fictions. Brookings Papers on Economic Activity, Issue no. 1:61–124. Brealey, Richard A., and Stewart C. Myers. 2000. Principles of corporate finance. 6th ed. New York: McGraw-Hill. Bresnahan, Timothy, Eric Brynjolfsson, and Lorin Hitt. 2002. Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics 117 (2): 339–76. Brynjolfsson, Eric, Lorin Hitt, and Shinkyu Yang. 2000. Intangible assets: How the interaction of information technology and organizational structure affects stock market valuations. Manuscript, MIT. ———. 2002. Intangible assets: Computers and organization capital. Brookings Papers on Economic Activity, Issue no. 1:137–81. Campbell, John Y., Andrew W. Lo, and A. Craig MacKinlay. 1997. The econometrics of financial markets. Princeton, NJ: Princeton University Press. Doms, Mark E., Wendy E. Dunn, Stephen D. Oliner, and Daniel E. Sichel. 2004. How fast do personal computers depreciate? Concepts and new estimates. In Tax policy and the economy, ed. J. Poterba, Cambridge: MIT Press. Griliches, Zvi. 1979. Issues in assessing the contribution of research and development to productivity growth. Bell Journal of Economics 10 (1): 92–116. Griliches, Zvi, and Jacques Mairesse. 1999. Production functions: The search for identification. In Econometrics and economic theory in the 20th century: The Ragnar Frisch Centennial Symposium, ed. S. Strom, Cambridge: Cambridge University Press. Hall, Bronwyn H. 1993a. Industrial research during the 1980s: Did the rate of return fall? Brookings Papers on Economic Activity, Issue no. 2:289–343. ———. 1993b. The stock market’s valuation of R&D investment during the 1980s. American Economic Review 84 (1): 1–12. Hall, Robert E. 2001. The stock market and capital accumulation. American Economic Review 91 (6): 1185–1202. Hand, John R. M. 2002. The profitability and returns-to-scale of expenditures on intangibles made by publicly-traded U.S. firms, 1980–2000. Manuscript. University of North Carolina.
Hausman, Jerry A. 2001. Mismeasured variables in econometric analysis: Problems from the right and problems from the left. Journal of Economic Perspectives 15 (4): 57–67. Hayashi, Fumio, and Tohru Inoue. 1991. The relation between firm growth and Q with multiple capital goods: Theory and evidence from panel data on Japanese firms. Econometrica 59 (3): 731–53. Hempell, Thomas. 2003. Do computers call for training? Firm-level evidence on complementarities between ICT and human capital investments. Manuscript, ZEW. Hulten, Charles R., and Frank Wykoff. 1981. Measurement of economic depreciation. In Depreciation, inflation, and the taxation of income from capital, ed. C. R. Hulten, Washington, DC: Urban Institute. Jaffe, Adam. 1986. Technological opportunity and spillovers of R&D: Evidence from firms’ patents, profits, and market value. American Economic Review 76 (4): 984–1001. Kiley, Michael T. 2000. Stock prices and fundamentals in a production economy. Board of Governors of the Federal Reserve System. Manuscript. Lev, Baruch. 2001. Intangibles: Management, measurement, and reporting. Washington, DC: Brookings Institution. Mairesse, Jacques, and Bronwyn H. Hall. 1996. Estimating the productivity of research and development: An exploration of GMM methods using data on French and United States manufacturing firms. NBER Working Paper no. 5501. Cambridge, MA: National Bureau of Economic Research. McGrattan, Ellen R., and Edward C. Prescott. 2000. Is the stock market overvalued? Federal Reserve Bank of Minneapolis. Manuscript. Nakamura, Leonard. 1999. Intangibles: What put the new in the new economy? Federal Reserve Bank of Philadelphia Business Review (July/August): 3–16. Prescott, Edward C., and Michael Visscher. 1980. Organization capital. Journal of Political Economy 88 (3): 446–61. Shiller, Robert J. 2000. Irrational exuberance. Princeton, NJ: Princeton University Press. Siegel, Jeremy. 1998. Stocks for the long run: The definitive guide to financial market returns and long-term investment strategies. 2nd ed. New York: McGraw-Hill.
3
The Valuation of Organization Capital
Baruch Lev and Suresh Radhakrishnan
3.1 Introduction

It is widely observed that within industries or economic sectors some firms systematically outperform their competitors. Wal-Mart in retail, Dell in personal computers (PCs), IBM in computers and information technology services, Microsoft in software, UPS and Federal Express in shipping, Goldman Sachs in investment banking, Southwest among airlines, and similarly in practically every industry. Such super-normal performance, generally manifested by sustained growth in sales, earnings, and market value, is not attributed to monopoly power or competition-constraining regulation, but rather to the organization of the leading enterprise. Such organization is manifested by unique systems and processes employed in the investment, production, and sales activities of the enterprise, along with the incentives and compensation systems governing its human resources. This collective resource, often dubbed "organization capital," is the major factor of production that is unique to the firm and thus capable of yielding abnormal—above cost of capital—returns, thereby generating enterprise growth. Most other factors of production, labor and capital in particular, are commodities in the modern economy, since com-

Baruch Lev is the Philip Bardes Professor of Accounting and Finance at Stern School of Business, New York University. Suresh Radhakrishnan is professor of accounting and information management at the School of Management, University of Texas at Dallas. We are grateful to Rashad Abdel-Khalik, Jason Cummins, Rebecca Hendersson, James Myers, Konduru Sivaramkrishnan, Shivaram Rajagopal, Theodore Sougiannis, and participants of the 2001 Strategy Conference at Harvard University, the Texas Symposium at the University of Texas at Dallas, the Berkeley Conference at the University of California at Berkeley, the National Bureau of Economic Research 2002 Conference on Research in Income and Wealth, and seminars at University of Illinois, Urbana-Champaign, and Purdue University.
petitors have equal access to them, and therefore yield, at best, the cost of capital.1 Organization capital, in contrast, is the persistent creator of value and growth for business enterprises. It is easy to conjure examples of specific business processes and designs that make up organization capital, and even to conceptualize intuitively this resource as an extra, unmeasured factor of production responsible for abnormal firm performance.2 However, there exist no operational measures of organization capital. Such measures will be highly useful to a multitude of decision makers. Managers obviously need to track the size and growth of organization capital—the major source of competitive advantage—and benchmark it against the past (is our organization capital deteriorating?) and against rivals. Furthermore, valuing organization capital will enable managers to assess the return on investments in creating this resource, such as information technology (IT) and brand enhancement. Specifically, relating IT expenditures or brand enhancement outlays to changes in organizational capital will indicate the returns on these investments and guide overall resource allocation (invest less or more in IT?). Investors will similarly be eager to incorporate the value of organization capital in their corporate valuation models. In merger and acquisition cases, the value of organization capital should play a prominent role, since, as will be argued below, such capital is predominately tacit and difficult to transfer across firms, and hence of questionable value in acquisitions. Economic, organization, and management researchers in search of quantifying the elusive concept of “quality of management” will find an operational measure of organization capital highly instructive, since this capital essentially reflects the sum total of managerial decisions and activities. In this study, we develop a firm-specific measure of organization capital and estimate it for a large sample of publicly traded companies. We test the validity of our measure within a widely used investment valuation model and show that it contributes significantly to the explanation of differences in market values of firms, beyond the traditional value indicators of assets in place and expected abnormal earnings (growth potential). We also document that financial analysts, the major information intermediaries in capital markets, fail to fully comprehend the value of firms’ organization capital, probably because of the absence of relevant information on this re1. Even R&D yields, on average, the cost of capital. Chan, Lakonishok, and Sougiannis (2001) report that the performance of firms conducting R&D is not superior, on average, to that of firms without R&D. See also B. Hall (1993) for similar results. 2. Examples of specific business processes and designs that are components of organization capital are Wal-Mart’s supply chain, where the reading of the barcodes of purchased products at the checkout register is directly transmitted to suppliers who are in turn largely responsible for inventory management; Cisco’s Internet-based product installation and maintenance system, estimated by Cisco’s chief financial officer to have saved $1.5 billion over three years (Economist, June 26, 1999, p. 12); and Dell’s pioneering built-to-order distribution system, where customers design their products.
source in corporate financial reports (e.g., no data on IT expenditures, employee training, brand enhancement activities, etc.). Section 3.2 of the paper discusses various concepts of organization capital and related research, while section 3.3 presents our methodology for measuring firm-specific organization capital and the empirical estimates. Section 3.4 examines the association between information technology—a key driver of organization capital—and our estimates of this resource; section 3.5 incorporates estimates of organization capital in a widely used valuation model to validate their usefulness, while section 3.6 concludes the paper. 3.2 What Exactly Is Organization Capital? A succinct definition of organization capital was provided by Evenson and Westphal (1995, p. 2237): “organization capital . . . [is] the knowledge used to combine human skills and physical capital into systems for producing and delivering want-satisfying products.”3 Specifically, organization capital according to Evenson and Westphal relates to the following firm attributes (the specific examples are ours): (a) firms’ operating capabilities, such as product design systems, production management and engineering (e.g., just-in-time inventory), input outsourcing (supply channels), and marketing technologies (e.g., on-line distribution channels); (b) investment capabilities, such as advanced project selection mechanisms (e.g., using real-options methodologies for project evaluation), personnel training, and financial engineering in fund raising and risk management (e.g., hedging assets, liabilities, and currency exposures with financial derivatives); and (c) innovation capabilities, such as unique research and development (R&D) procedures (e.g., a scientific approach to drug development), adaptive capacity for learning from others, communities of practice to share information among employees, as well as decision and legal procedures for appropriating maximal benefits from intellectual property (e.g., patent licensing and technology turf protection). Organization capital is thus an agglomeration of technologies—business practices, processes and designs, and incentive and compensation systems—that together enable some firms to consistently and efficiently extract from a given level of physical and human resources a higher value of product than other firms find possible to attain.4 3. In a similar vein, Atkeson and Kehoe (2002, p. 1) wrote: “At least as far back as Marshall, economists have argued that organizations store and accumulate knowledge that affects their technology of production. This accumulated knowledge is a type of unmeasured capital that is distinct from the concepts of physical or human capital in the standard growth model.” 4. Sometimes the absence of organization capital is mentioned as a potential source of future value: in an interview with Bill Miller (Barron’s, February 3, 2003, p. 26), this most successful fund manager (an average annual return of 14.5 percent over the past 10 years) said the following about Home Depot: “People are talking about its problems . . . but all these problems are getting fixed. This is a company that didn’t have any perpetual inventory, didn’t
Some researchers on organization capital view this resource as embodied in employees (e.g., Jovanovic 1979; Becker 1993). Elaborating on this view, Prescott and Visscher (1980, pp. 447–48) include the following factors in organization capital: (a) “what the firm knows about the abilities of its personnel . . . improving matches between employees and jobs by measuring performance,” (b) what “the firm learns about its employees to improve the match between employees working in teams,” and (c) “the human capital of the firm’s employees.” Others view organization capital beyond that embedded in people, defining it as “a firm-specific capital good jointly produced with output and embodied in the organization itself ” (Atkeson and Kehoe 2002, p. 3). Proponents of this approach include Arrow (1962), Rosen (1972), Tomer (1987), and Ericson and Pakes (1995). In the present study, we follow the latter—firm-embodied—concept of organization capital. The competitive advantages conferred on firms by organization capital are mainly due to the fact that this resource cannot be completely codified and hence transferred to other organizations or imitated by them. As Evenson and Westphal (1995, p. 2213) note: “Much of the knowledge about how to perform elementary processes and about how to combine them in efficient systems is tacit, not physically embodied and neither codified nor readily transferable. Thus, though two producers in the same circumstances may use identical material inputs in conjunction with equal information, they may nonetheless employ what are really two distinct techniques owing to differences in understanding of the tacit elements.” The car industry exemplifies the difficulties in imitating and adopting others’ organization capital: with all that has been written about Japanese car manufacturers’ efficiency systems (e.g., just-in-time production process) and the vast efforts to imitate these systems by competitors over the last two decades (including joint Japanese–U.S. production facilities, such as the GM–Toyota Nummi plant in Freemont, California), Japanese car manufacturers are still the world leaders in efficiency, profitability, and quality.5 Clearly, the essential elements of organization capital are not transferable across firms, even over extended time periods. The partial tacitness of organization capital is among the major reasons this resource is hard to measure, at both the input and output levels. Part of the investment (input) in organization capital is not fully tracked by firms. For example, the cost of on-the-job training, particularly the mentoring of young employees by senior ones, is generally not recorded by the accounting system. Also not recorded as an investment are the extensive have any point-of-sales terminals, didn’t have pay scales for their employees, and didn’t have any centralized purchasing. It was remarkable what they didn’t have, which tells you how powerful the economics of the business were.” 5. In January 2004, Toyota Co. announced that in 2003 it had overtaken the Ford Motor Co. in worldwide car sales, and thus captured second place, behind General Motors, among car leaders.
efforts of employees to better educate themselves and improve the efficiency of firms’ production, research, and selling processes (the “suggestions box,” for example). In general, the smaller the enterprise, the less likely that the accounting system will systematically track and record all the investment in organization capital. Consequently, firms and investors lack reliable input (cost) measures of organization capital. Nor is the output of organization capital easy to quantify. Such output—business designs and processes, for example—is essentially an intermediate product without a market price (see Aghion and Howitt, 1998, chap. 12, for discussion of difficulties in measuring knowledge output). Moreover, the accounting system does not segregate the contribution of organization capital to the firm’s final output (sales, profits) from the contribution of other inputs, and therefore accounting is not equipped to value organizational capital. Finally, to cap the measurement difficulties, the valuation of organization capital requires an estimate of its rate of obsolescence. The emergence of new systems and processes to replace those currently in use (e.g., Internetbased supply channels), along with imitation of elements of organization capital by competitors (J.C. Penney recently implemented a supply-chain system similar to Wal-Mart’s) reduces the value of the firm’s organization capital. Yet reliable estimates of the obsolescence rate of organization capital are not available. Thus, given the daunting challenges in measuring organization capital, it is not surprising that widely accepted measures of this important corporate resource are not available. 3.3 Estimating Organization Capital 3.3.1 Methodology We model the firm’s output, or sales (denoted by SALE), as a function of its major inputs: physical capital (PPE: property, plant, and equipment), labor (EMP: number of employees), and R&D capital (RND), where RND represents the firm’s innovative activities, that is, its intangible assets. The following function, which assumes constant returns to scale (as in R. Hall 2000), is used to estimate organization capital: (1)
SALEit = a0it PPEit^b1t EMPit^b2t RNDit^b3t eit,
where SALEit is the revenue of firm i in year t, a0it stands for organization capital, PPEit is the net value of plant, property, and equipment, EMPit is the number of employees, RNDit is the firm’s R&D capital (the latter three variables are valued at year-end), and eit is an error term.6 It is possible to estimate organization capital from the residual of 6. Note that the physical (PPE) and intangible (RND) inputs in (1) are measured in monetary values, whereas labor (EMP) is measured by the number of employees. This is due to the absence of a reliable, publicly reported monetary measure of firm-specific human capital. This is, in fact, an advantage, since the presence of the number of employees in expression (1)
expression (1), either by extracting the systematic component of eit, or by inserting firm-specific dummy variables in equation (1). This approach was followed in the early attempts to estimate total factor productivity (TFP) in macroeconomic growth models (Solow 1957).7 The downside of estimating TFP, or our firm-specific organization capital, from the residual of a growth model is that a residual estimate is essentially a black box, reflecting random shock and various omitted variables along with organization capital, or TFP. This led Moses Abramovitz to his oft-mentioned characterization of residual-based TFP estimate as “a measure of our ignorance.” Furthermore, it has been shown (Griliches and Mairesse 2000) that a residual estimation of TFP leads to downward-biased coefficient estimates. Accordingly, we have chosen to estimate the effect of organization capital on output by using the firm’s reported sales, general, and administrative (SGA) expenses as a proxy for organization capital.8 This major income statement item includes most of the expenditures that generate organization capital, such as IT outlays, employee training costs, brand enhancement activities, payment to systems and strategy consultants, and the cost of setting up and maintaining Internet-based supply and distribution channels. For example, PepsiCo’s SGA expenses of $7.9, $8.1, and $8.5 billion in 2000, 2001, and 2002, respectively include advertising expenses, sales incentives, expenditures to support global and domestic affiliates, and expenditures on logistics of distribution systems (direct-store-delivery systems). Obviously, such brand enhancement and organization design outlays/investments are aimed at enhancing PepsiCo’s organization capital. To be sure, not all of the firm’s SGA expenses enhance organization capital—PepsiCo’s SGA includes executives’ compensation, for example— but it is reasonable to assume that most of the expenditures aimed at creating and enhancing organization capital are included in the firm’s SGA expenses. Specifically, we model the organization capital variable in (1), a0it , as follows: (2)
log(a0it) = b0t + b0st log(SGAit),
where SGA is the firm's sales, general, and administrative expenses in year t. We allow for two types of organization capital: (a) an economywide,

shifts the value of organization capital embedded in employees (particular skills and incentives) to the estimate of the firm's organization capital, a0it, where it belongs.
7. In the basic growth model, the growth rate of GDP is related to capital, labor, and a residual, generally believed to reflect the economy's technology or innovation capabilities.
8. A recent McKinsey Global Institute (2002) study of the performance of 1000 companies during an eighteen-year period spanning 1982 to 1999 concludes that leaders tend to expand their SGA costs significantly above average during recessions as well as increasing their R&D spending. That is, successful companies switch to an attack mode during bad times, because competition is weakened, and do this partially by expenditures reflected in the item SGA expenses. This is an example of building organization capital that helps companies perform better than average consistently.
common organization capital (b0t ), which is available to all firms (e.g., a certain level of population education, the prevailing legal and institutional setting, etc.), and (b) a firm-specific organization capital (b0st log[SGA it ]), which is developed and maintained by each firm (e.g., coded knowledge, production blueprints, business processes and procedures, marketing networks and channels, etc.). A firm’s SGA is determined by (a) the level of the firm’s activity, as captured by its output (SALE), and (b) the committed portion of expenditures (e.g., multiyear employee training programs), captured by the lagged value of SGA. The level of output affects SGA because as the level of activity increases its business processes and practices need to be scaled up to accommodate the delivery of products and services for the larger base of customers. The committed portion of SGA reflects the adjustment costs involved in making changes in business processes and practices which are not instantaneous. To accommodate the endogeneity of SGA we model it as follows: (3)
log(SGAit) = g0t + g1t log(SALEit) + g2t log(SGAi,t–1) + log(uit).
We estimate expressions (1) and (3) by taking logarithms of annual changes, after substituting expression (2) into expression (1): (4)
log(SALEit /SALEi,t–1) = b0t + b0st log(SGAit /SGAi,t–1) + b1t log(PPEit /PPEi,t–1) + b2t log(EMPit /EMPi,t–1) + b3t log(RNDit /RNDi,t–1) + log(eit /ei,t–1)

(5)

log(SGAit /SGAi,t–1) = g0t + g1t log(SALEit /SALEi,t–1) + g2t log(SGAi,t–1 /SGAi,t–2) + log(uit /ui,t–1).
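As described in the next paragraph, expressions (4) and (5) are estimated by two-stage least squares on annual industry cross sections. The listing below sketches one such cross section with synthetic data: SGA growth is treated as endogenous in (4), and lagged SGA growth, the additional right-hand-side variable in (5), serves as the excluded instrument. The variable names, parameter values, and simulated sample are assumptions made for illustration only; they are not the authors' programs or their Compustat estimates.

import numpy as np

rng = np.random.default_rng(1)
n = 500
d_ppe = rng.normal(0.05, 0.10, n)                 # log growth of PPE
d_emp = rng.normal(0.02, 0.08, n)                 # log growth of employees
d_rnd = rng.normal(0.08, 0.15, n)                 # log growth of R&D capital
d_sga_lag = rng.normal(0.04, 0.10, n)             # lagged SGA growth (instrument)
shock = rng.normal(0, 0.05, n)                    # shared shock makes SGA growth endogenous
d_sga = 0.01 + 0.2 * d_sga_lag + shock
d_sale = (0.03 + 0.4 * d_sga + 0.1 * d_ppe + 0.3 * d_emp + 0.1 * d_rnd
          + 0.5 * shock + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), d_sga, d_ppe, d_emp, d_rnd])      # regressors of (4)
Z = np.column_stack([np.ones(n), d_sga_lag, d_ppe, d_emp, d_rnd])  # instruments
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # first stage
b = np.linalg.lstsq(X_hat, d_sale, rcond=None)[0] # second stage
print("b0t, b0st, b1t, b2t, b3t:", np.round(b, 2))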
We estimate expressions (4) and (5) using the two-stage least squares procedure cross-sectionally for each sample year (1978–2002) for all firms listed on the Compustat database that operate in twelve major industry categories and that have the required financial data (see appendix for the industry classification).9 Our sample consists of all firms with both annual sales and total assets 9. We use the Fama and French (1988, 1997) industry classification, which is widely applied in finance research, since in the next section we use capital market values to validate our organization capital estimates.
greater than $5 million, to avoid insignificant enterprises. Data for each sample firm on sales (Compustat data item no. 12); plant, property, and equipment (PPE; no. 8); number of employees (no. 29); annual R&D expense (no. 46); and SGA expense (no. 132) are obtained from the Compustat Annual Database. We estimate research and development capital (RND) by capitalizing and amortizing the annual research and development expenditures (R&D) over five years (a 20% annual amortization rate),10 and we set the R&D expense to zero when data were not available on Compustat. We generate our estimates for each sample year for two groups of firms—those with R&D expenditures and those without R&D expenditures—to investigate differences in organization capital between firms that formally invest in R&D and those that do not. The sample contains 57,258 non-R&D and 32,979 R&D firm-year observations, spanning the period 1978–2002. Panel A of table 3.1 provides descriptive statistics of the variables in expressions (4) and (5), whereas panel B presents the means and respective t-values of the 300 estimates (twenty-five years spanning 1978–2002 and twelve industry groups each year) of expression (4) for the R&D and nonR&D firms. It is evident from panel A that, at the mean, R&D firms are larger than non-R&D ones, although at the median the reverse is true. This indicates the presence in the sample of some very large R&D firms (e.g., General Electric, Microsoft, Pfizer).11 Panel B indicates that, for both R&D and non-R&D firms, the logarithm of growth in the common, economywide organization capital (i.e., the intercept) is 0.03, which represents approximately 3 percent of average output growth. This is consistent with the aggregate Divisia index estimates in R. Hall (2000). For the R&D (nonR&D) firms, the marginal productivity of plant, property, and equipment (b∗1 in expression [3]) is 0.08 (0.10), the marginal productivity of human capital (b∗2 ) is 0.33 (0.15), and the marginal productivity of research and development (b∗3 ) is 0.09 (where the asterisk indicates the mean coefficient estimate across all years and industries). The difference between the marginal productivities of PPE for the subsamples of R&D and non-R&D firms is not significant at the P 0.01 percent level, suggesting that R&D does not appreciably affect the marginal productivity of physical assets. In contrast, the difference between the marginal productivities of employees (EMP) across the R&D and non-R&D subsamples is statistically significant at the P 0.01 level, suggesting that R&D enhances the effi10. Most empirical estimates of the amortization rate of R&D yield an annual rate of 15– 20 percent (e.g., Lev and Sougiannis 1996). 11. In table 3.1, panel A, for firms with R&D: the maximum (minimum) SALE is that of General Motors (Human Genome Sciences), PPE is that of Nippon, Telephone and Telegraph (Abiomed Inc.), EMP is that of General Motors (Franklin Telekom), SGA is that of IBM (Vacu Dry), and RND is that of General Motors (Vacu Dry); and for firms without R&D: the maximum (minimum) SALE is that of Wal-Mart (Lovelady Ike), PPE is that of Deutche Telekom (Washington Homes), EMP is that of Wal-Mart (Lovelady Ike), SGA is that of Wal-Mart (Turf Paradise).
Table 3.1    Estimating organization capital

A: Descriptive statistics

                                   With R&D                                                 Without R&D
Variable    Mean   Min.   First quartile   Median   Third quartile     Max.      Mean   Min.   First quartile   Median   Third quartile     Max.
SALE       1,935     5          37           134         715         180,557    1,074     5          46           163         581         217,799
PPE          753     1           6            26         175         111,921      445     1           8            37         174          70,152
EMP           11   .05         .32          1.19        5.60             876        7   .04         .36          1.36        4.60           1,383
SGA          274     1           9            29         119          22,977      163     1           8            24          82          36,713
RND          189   .19           3            13          56          24,562

B: Mean estimates of expressions (4) and (5)

                                        With R&D                            Without R&D
                              Coefficient   t-statistic    P      Coefficient   t-statistic    P
Expression (4)
  Intercept                       .03          11.15      .00         .03            3.01     .00
  log(SGAt /SGAt–1)               .41           6.86      .00         .58           13.96     .00
  log(PPEt /PPEt–1)               .08           5.55      .00         .10            9.07     .00
  log(EMPt /EMPt–1)               .33          13.38      .00         .15           17.51     .00
  log(RNDt /RNDt–1)               .09          10.09      .00
  R2                            77.67%                               74.32%
Expression (5)a
  Intercept                       .01           7.20      .00         .01            6.42     .00
  log(SGAt–1/SGAt–2)              .21           4.25      .00         .18            4.31     .00
  log(SALEt /SALEt–1)             .71          64.07      .00         .78           76.69     .00
  R2                            78.36%                               67.27%

C: Descriptive statistics of the organization capital estimates

Variable               Mean   Standard deviation     Min.     First quartile   Median   Third quartile     Max.
OC                       96         1,226          –40,770          1              6           35         72,440
SALEit – SALEi,t–1       90           974          –32,080         –1              8           46         65,073

Notes: The numbers in panels A and B are in millions of dollars, other than EMP, which is in hundreds of employees. Expressions (4) and (5) are estimated annually for each of the 12 industry groups defined in appendix A using the two-stage-least-squares procedure. The sample contains 57,258 (non-RND) and 32,979 (RND) firm-estimation year panels, spanning 1978–2002. SALE is the net revenues; PPE is the plant, property, and equipment net of depreciation; EMP is the number of employees; RND is the research and development capital computed by capitalizing and amortizing R&D expenditures over five years; SGA is the selling, general, and administrative expenses.
a Estimates of the mean coefficient and the associated t-statistics of the 300 regressions, one for each of the 25 years and 12 industry groups in each year.
ciency of employees, apparently through process R&D. The marginal productivity of SGA (b∗0s ), our proxy for organization capital, is 0.41 and 0.58 for R&D and non-R&D firms, respectively, where the difference in the marginal productivities is significant at the P 0.01 level. This suggests that non-R&D firms sustain their competitive edge through organizational processes and designs, generated by SGA expenditures, to compensate for the absence of R&D. 3.3.2 Deriving Firm-Specific Estimates We now derive firm-specific estimates of the annual contribution of organization capital to output growth by transforming the coefficient estimates of expression (4) into monetary values. Specifically, we define two expectations of firm’s output (sales) from expression (4): the expected output of firm i in year t with and without the common and firm-specific organization capital, estimated by the proxy SGA.12 From expression (4), the expected output of firm i in year t with organization capital is as follows:
(6)   SALE*_it = SALE_{i,t–1} exp[b*_{0t} + b*_{0st} log(SGA_it /SGA_{i,t–1}) + b*_{1t} log(PPE_it /PPE_{i,t–1}) + b*_{2t} log(EMP_it /EMP_{i,t–1}) + b*_{3t} log(RND_it /RND_{i,t–1})],
where b*_{nt}, for n = 0, 0S (S for specific organization capital), 1, 2, and 3, are the coefficient estimates obtained from the annual cross-sectional estimation of expression (3) for each industry described above. The expected output of firm i in year t without the effect of organization capital is as follows:
(7)   SALE**_it = SALE_{i,t–1} exp[b*_{1t} log(PPE_it /PPE_{i,t–1}) + b*_{2t} log(EMP_it /EMP_{i,t–1}) + b*_{3t} log(RND_it /RND_{i,t–1})].
Given the two output expectations, expressions (6) and (7), our firm-specific measure of organization capital (OC) is the difference between expected sales with and without organization capital:

(8)   OC_it = SALE*_it – SALE**_it,
where SALE*_it and SALE**_it are defined by equations (6) and (7), respectively.13 Note that the annual estimate of OC in equation (8) is not the total value of the firm's organization capital but rather the annual contribution of organization capital to output growth.

12. We also used the expected SGA from our estimates of expression (5) and obtained qualitatively similar results.
13. It is possible to estimate organization capital (OC) by subtracting expected sales without OC, expression (7), from the firm's actual sales (SALE_it), rather than from expected sales with OC (expression [6]). The drawback of such an estimate is that the consequent OC measure includes the noise in the regression residual, whereas our estimate in expression (6) eliminates the residual.
Table 3.1, panel C (top row), provides descriptive statistics for the organization capital (OC) estimate. The mean OC is $96 million, representing about 4 percent of average sales (SALE in table 3.1, panel A). The estimate of a 4 percent average contribution of organization capital to output is very close to Atkeson and Kehoe's (2002, table 1) estimate of the share of the economy's organization capital in aggregate output, which ranges between 2.7 percent and 4.0 percent. Note also that somewhat less than 25 percent of the sample firms have negative OC values, indicating that organization capital can be counterproductive in certain years. Finally, since the mean annual change in sales is $90 million for our sample (table 3.1, panel C), the estimated average contribution of organization capital to sales growth—$96 million—is almost 100 percent, indicating the importance of organization capital in generating output growth.14

3.3.3 Firm-Specific Examples

To provide intuition and insight into our organization capital measure, table 3.2 presents the underlying data and final estimates for IBM, covering the period 1994–2000. Panel A reports the values of the variables in expression (3): sales (SALE, the dependent variable), as well as the independent variables—SGA; PPE; number of employees (EMP, in thousands); and R&D capital (RND).15 Panel B reports the annual growth rates (the log of the ratio of successive yearly values) of the variables in panel A. Panel C provides the annual cross-sectional estimates of the coefficients of expression (3), run for the firms in the industry in which IBM operates.16 Note the relative stability of the firm-specific organization capital coefficient, b_{0st}. Panel D derives the estimates of IBM's annual contributions of organization capital (details in the notes to table 3.2): columns (B) and (D) provide for each year the log of predicted sales growth, with and without organization capital, respectively; that is, using expressions (6) and (7) with the coefficients reported in panel C, multiplied by the respective logs of the growth variables in panel B. Columns (C) and (E) of panel D provide the antilogs of columns (B) and (D), namely the conversion to monetary values of IBM's predicted sales (with and without organization capital).
14. In table 3.1, panel C, the maximum (minimum) organization capital estimate is that of Total Fina in 2000 (Honda Motors Ltd. in 2002), and the maximum (minimum) change in sales is that of Total Fina in 2000 (Mitsui and Co. in 1995).
15. The period covered (1994–2000) marks the remarkable recovery of IBM under then chief executive officer (CEO) Lou Gerstner, who took the helm of the then-struggling company in late 1993. Data in panel A tell part of the recovery story: a continuous increase in sales (from $62.7 billion to $87.5 billion) while cutting expenses (except for payroll and capital expenditures).
16. These computer industry estimates (SIC 7370) can be compared with the overall sample (and time) estimates of expression (4) in panel B of table 3.1.
Table 3.2    Illustration of organization capital computation for IBM

A: Fundamental data
Year   SALE     SGA      PPE      EMP   RND
1994   62,716   23,840   17,521   256   14,504
1995   64,052   20,279   16,664   220   12,960
1996   71,940   20,448   16,579   225   13,625
1997   75,947   21,508   17,407   241   12,934
1998   78,508   21,511   18,347   269   12,829
1999   81,667   21,708   19,631   291   13,039
2000   87,548   21,854   17,590   307   13,351

B: log(growth)
Year   log(SGAt/SGAt–1)   log(PPEt/PPEt–1)   log(EMPt/EMPt–1)   log(RNDt/RNDt–1)
1995   –.1618             –.0501             –.1531             –.1126
1996   .0083              –.0051             .0247              .0500
1997   .0505              .0487              .0656              –.0520
1998   .0001              .0526              .1132              –.0081
1999   .0091              .0676              .0771              .0162
2000   .0067              –.1098             .0546              .0236

C: Estimates of expression (4)
Year   b0t    b0st   b1t   b2t   b3t
1995   .07    .30    .10   .39   .08
1996   .01    .31    .10   .34   .08
1997   –.01   .24    .09   .36   .07
1998   –.02   .25    .10   .33   .08
1999   .01    .31    .12   .34   .09
2000   .02    .27    .09   .32   .11

D: Computations
Year   SALEt–1 (A)   log(predicted sales growth with OC) (B)   Predicted sales with OC (C)   log(predicted sales growth without OC) (D)   Predicted sales without OC (E)   Organization capital (F) = (C) – (E)
1995   62,716        –.0486                                    59,741                        –.0628                                       58,899                           842
1996   64,052        .0247                                     65,655                        .0061                                        64,447                           1,208
1997   71,940        .0259                                     73,831                        .0129                                        72,872                           959
1998   75,947        .0181                                     77,333                        .0046                                        76,301                           1,032
1999   78,508        .0479                                     82,361                        .0122                                        79,476                           2,885
2000   81,667        .0293                                     84,093                        –.0055                                       81,221                           2,872

Notes: The numbers in panel A are in millions of dollars, other than EMP, which is in thousands of employees. The numbers in panel B are the logarithms of growth of the respective variables (e.g., –.0501 = log(16,664/17,521)). Expressions (4) and (5) are estimated annually for each of the 12 industry groups defined in appendix A using the two-stage least squares procedure. Panel D, column (B) = b_{0t} + b_{0st} log(SGA_it/SGA_{i,t–1}) + b_{1t} log(PPE_it/PPE_{i,t–1}) + b_{2t} log(EMP_it/EMP_{i,t–1}) + b_{3t} log(RND_it/RND_{i,t–1}), where estimates of b_{jt} are given in panel C. Panel D, column (C) = exp{column (B)} × SALE_{t–1}. Panel D, column (D) = b_{1t} log(PPE_it/PPE_{i,t–1}) + b_{2t} log(EMP_it/EMP_{i,t–1}) + b_{3t} log(RND_it/RND_{i,t–1}), where estimates of b_{jt} are given in panel C. Panel D, column (E) = exp{column (D)} × SALE_{t–1}. Panel D, column (F) = column (C) – column (E). See table 3.1 notes for variable definitions.
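To make the arithmetic of expressions (6)-(8) and panel D concrete, here is a minimal Python sketch (our own illustration, not the authors' code) that computes one year's organization capital contribution from prior-year sales, the input growth rates, and an industry-year's coefficient estimates. The function and dictionary keys are assumed names; plugging in the rounded panel B and panel C values for IBM in 1995 yields only an approximation to the corresponding panel D figures, since the published inputs are rounded.

    import math

    def oc_contribution(sale_prev, growth, coef):
        """One year's contribution of organization capital to output, per expressions (6)-(8).

        sale_prev : prior-year sales, SALE_{i,t-1}
        growth    : log growth rates of the inputs, keys 'SGA', 'PPE', 'EMP', 'RND'
        coef      : industry-year coefficient estimates, keys 'b0' (common OC),
                    'b0s' (firm-specific OC, on SGA growth), 'b1' (PPE), 'b2' (EMP), 'b3' (RND)
        """
        # Expression (6): expected sales with organization capital.
        log_with = (coef['b0'] + coef['b0s'] * growth['SGA'] + coef['b1'] * growth['PPE']
                    + coef['b2'] * growth['EMP'] + coef['b3'] * growth['RND'])
        # Expression (7): expected sales without organization capital
        # (drop the intercept and the SGA term).
        log_without = (coef['b1'] * growth['PPE'] + coef['b2'] * growth['EMP']
                       + coef['b3'] * growth['RND'])
        sale_with = sale_prev * math.exp(log_with)
        sale_without = sale_prev * math.exp(log_without)
        # Expression (8): OC is the difference between the two expectations.
        return sale_with - sale_without

    # IBM, 1995, using the rounded values in table 3.2, panels A-C.
    oc_1995 = oc_contribution(
        62716.0,
        {'SGA': -0.1618, 'PPE': -0.0501, 'EMP': -0.1531, 'RND': -0.1126},
        {'b0': 0.07, 'b0s': 0.30, 'b1': 0.10, 'b2': 0.39, 'b3': 0.08},
    )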
Fig. 3.1   Example of Xerox
Finally, the annual contribution of organization capital to IBM's output is reported in the rightmost column of panel D, as the difference between columns (C) and (E). The organization capital data show a strong and sustained improvement in IBM's organization capital during 1995–2000, corroborating the highly successful turnaround of IBM. Interestingly, of the overall sales growth of IBM over that period, amounting to $19 billion (81.7 – 62.7), organization capital contributed $10 billion (the sum of the annual contributions of organization capital), or 50 percent.

Since the contribution of organization capital is reflected in the firm's performance (e.g., sales and earnings growth), the question arises whether the measurement of organization capital adds useful information beyond that in sales or other financial data. This question is addressed comprehensively in the next section, but the following case of Xerox Corp., which fell on hard times in the late 1990s, provides a first inkling. Figure 3.1 presents certain financial and organization capital data for Xerox. The line and numbers in bold are based on our measure of organization capital (the number is the organization capital divided by 100). The dotted line and associated numbers represent the stock price adjusted for stock splits. The firm's sales (not shown in figure 3.1) increased continuously through 2000, as did net income (the latter interrupted by a few years of losses, mainly due to corporate acquisition costs and other accounting write-offs). As shown in figure 3.1, Xerox stock followed suit, with an almost sevenfold increase in price over the 1988–2000 period.
Alas, the stock price collapsed in 2001–2, primarily as a reaction to reported losses and accounting scandals. It is clear that investors were totally surprised by Xerox's collapse. Xerox's organization capital, however, exhibits a different pattern. The annual contributions of organization capital to output (OC_t) spanned $700–1,200 million throughout 1988–97 (with the exception of 1991–94, when OC contributions were close to $100 million). From 1998, however—two years before the downturn in sales and the stock price—Xerox's annual organization capital contributions decreased sharply to about $700 million in the following three years. Thus, our organization capital measure provided a two-year advance warning of Xerox's travails, relative to its financial reports and to stock market investors and analysts. As demonstrated by the Xerox case and confirmed over the entire sample in section 3.5, the information conveyed by the organization capital measure is not subsumed by the contemporaneous sales and earnings data of the companies; rather, it provides a more timely indication of the firm's economic condition and performance.

3.4 Organization Capital and Information Technology

Information technology (IT) is a major enabler of organization capital. Effective business processes and practices, such as Internet-based supply and distribution channels, production design and control systems, or risk-hedging mechanisms, rely heavily on the IT infrastructure. Indeed, Brynjolfsson and Yang (1999) explain their unexpectedly large estimate of the contribution of IT to firm value—about $10 of value for $1 of IT investment—by postulating that IT expenditures proxy in their regressions for the unmeasured organization capital, which is the intrinsic contributor to firm value. Accordingly, if our measure captures organization capital, it should be correlated with IT expenditures.17 Public companies, however, do not identify their expenditures on IT in financial reports. Therefore, for a subsample of firms, we obtained annual data on firm-specific IT expenditures from Information Week reports published in its IT 500 survey.18 To quantify the association between OC and IT expenditures, we estimate the following regression:
(9)   log(OC_it) = a_0 + a_1[Year dummies] + a_2 MKS_it + a_3 log(IT_{i,t–1}) + e_it,
17. Early in the ongoing debate about IT's contribution to productivity and growth, Robert Solow commented: "I see IT everywhere, except in the productivity data" (Gordon 2000). Subsequent studies (e.g., Gordon and Baily 1993) recorded substantial IT contribution, but only in a few industries, particularly computers and software. Others (e.g., Brynjolfsson and Hitt 1996) reported a substantial, widespread contribution.
18. These surveys were published during 1991–97. Information Week ceased to publish the IT expenditure data from 1998 onward.
where OC_it is firm i's organization capital contribution in year t (1991–97), IT_it is firm i's information technology expenditures in year t, and MKS_it is firm i's market share in year t, measured as the percentage of firm i's sales relative to the total sales of all the firms in its industry group (see appendix). We incorporate the market share indicator in expression (9) as a control variable. Firms with effective organization capital will capture significant market share. Thus, the presence of the firm's market share in equation (9) controls for certain missing variables that affect organization capital, thereby allowing us to focus on the unique contribution of IT to OC. We use the information technology expenditures in the preceding year (IT_{i,t–1}) to assess whether spending on IT contributes to OC in a sustained fashion. We also estimate expression (10), which is a variant of expression (9), using the preceding year's IT expenditures as well as the current year's change in IT expenditures over the preceding year:19

(10)   log(OC_it) = a_0 + a_1[Year dummies] + a_2 MKS_it + a_3 log(IT_{i,t–1}) + a_4[log(IT_it) – log(IT_{i,t–1})] + e_it,

where all variables are defined as in expression (9).

Table 3.3, panel A, provides the descriptive statistics of IT, OC, and market share (MKS) for the Information Week sample. The mean (median) IT expenditure is $192 ($78) million, with a minimum of $1 million and a maximum of $4.3 billion. Thus, the sample includes a cross section of companies that span a broad range of IT expenditures. Similarly, the mean (median) MKS is 5.8 percent (2.8 percent), with a minimum of 0.006 percent and a maximum of 75 percent, indicating that the sample includes a cross section of companies vis-à-vis market share. Table 3.3, panel B, provides the Spearman rank correlations among OC, IT, lagged IT, and MKS. Our measure of organization capital is highly correlated with both current IT spending and lagged IT spending (correlation coefficients of 0.54 and 0.51 for current and lagged IT expenditures, respectively). This provides initial evidence that IT spending is associated with building organization capital.

Table 3.3, panel C, provides estimates of expressions (9) and (10). The coefficient of MKS is, as expected, positive and highly significant. The coefficients of both the prior year's IT expenditures and the current year's change in IT expenditures are positive and highly significant, indicating that information technology is a major contributor to organization capital, as captured by our measure. Table 3.3, panel D, provides the estimates of expressions (9) and (10) when the sample is partitioned into companies with yearly high, medium, and low market-to-book values. The market-to-book ratio, namely the ratio of the firm's value in the capital market to its value on the balance sheet—the latter reflecting primarily physical and financial assets—conveys investors' assessment of the firm's assets that are missing from its balance sheet (primarily intangible assets).

19. We do not include the current value of information technology expenditure in expression (9) because IT_it and IT_{i,t–1} are highly correlated (see table 3.3, panel B).
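A minimal sketch of how expressions (9) and (10) could be estimated follows (our own illustration, not the authors' code); the DataFrame and column names oc, mks, it, it_lag, and year are assumed placeholders, and firm-years with nonpositive OC or IT must be dropped before taking logs.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def estimate_expr9_10(df: pd.DataFrame):
        """OLS estimates of expressions (9) and (10) on a firm-year panel."""
        d = df[(df['oc'] > 0) & (df['it'] > 0) & (df['it_lag'] > 0)].copy()
        d['log_oc'] = np.log(d['oc'])
        d['log_it_lag'] = np.log(d['it_lag'])
        d['d_log_it'] = np.log(d['it']) - np.log(d['it_lag'])
        # Expression (9): year dummies, market share, and lagged (log) IT spending.
        m9 = smf.ols('log_oc ~ C(year) + mks + log_it_lag', data=d).fit()
        # Expression (10): add the current year's change in log IT spending.
        m10 = smf.ols('log_oc ~ C(year) + mks + log_it_lag + d_log_it', data=d).fit()
        return m9, m10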
Table 3.3    Organization capital and information technology

A: Descriptive statistics
Variable          Mean   Standard deviation   Min.     First quartile   Median   Third quartile   Max.
OC ($ millions)   441    1,672                –9,656   31               181      553              17,782
IT ($ millions)   192    352                  1        33               78       184              4,261
MKS (%)           5.86   8.92                 .01      1.29             2.84     6.48             74.46

B: Correlations
              OC    ITt   ITt–1   ITt – ITt–1
ITt           .54
ITt–1         .51   .92
ITt – ITt–1   .04   .02   –.28
MKSt          .42   .47   .48     –.04

C: Organization capital and IT expenditures
                               Expression (9)                Expression (10)
                               Coef.    t-stat.   P          Coef.    t-stat.   P
log(MKSt) (a2)                 .42      10.59     .00        .37      9.11      .00
log(ITt–1) (a3)                .23      7.53      .00        .31      9.17      .00
log(ITt) – log(ITt–1) (a4)                                   .28      4.46      .00
Adj. R2                        31.17%                        32.76%

D: Organization capital and IT expenditures—Partitioned by market-to-book value of equity
                               Expression (9)                Expression (10)
                               Coef.    t-stat.   P          Coef.    t-stat.   P
Low market-to-book
log(MKSt) (a2)                 .56      7.49      .00        .55      7.29      .00
log(ITt–1) (a3)                .16      2.97      .00        .18      3.14      .00
log(ITt) – log(ITt–1) (a4)                                   .11      1.03      .31
Adj. R2                        33.24%                        33.51%
Medium market-to-book
log(MKSt) (a2)                 .21      2.73      .00        .12      1.58      .11
log(ITt–1) (a3)                .34      6.52      .00        .46      7.57      .00
log(ITt) – log(ITt–1) (a4)                                   .49      3.68      .00
Adj. R2                        28.04%                        34.48%
High market-to-book
log(MKSt) (a2)                 .48      7.71      .00        .40      6.22      .00
log(ITt–1) (a3)                .21      4.33      .00        .31      5.48      .00
log(ITt) – log(ITt–1) (a4)                                   .33      3.27      .00
Adj. R2                        35.90%                        38.17%

Notes: The numbers in panel B are Spearman rank correlation coefficients. The low, medium, and high market-to-book value of equity groups contain the bottom, middle, and top one-third of the observations sorted each year on the market-to-book value of equity. The Information Week sample contains 1,818 firm-years, spanning 1991–97. OC_it is the abnormal output, computed as the difference between the predicted value of sales obtained by estimating expression (4) and the predicted value of sales with the asset productivities alone (i.e., expression [8]). MKS_it is the percentage market share of firm i in year t computed for the 12 industry groups, MKS_imt = SALE_imt / Σ_j SALE_jmt, where firm i belongs to industry group m and the sum is over all firms in industry group m. IT_it is the information technology expenditure of firm i in year t as reported in the Information Week 500 survey.
Accordingly, high market-to-book value companies are intangible intensive—that is, rich in organization capital. The estimates of expressions (9) and (10) in panel D indeed show that IT expenditures are much more strongly associated with organization capital—higher values of the IT regression coefficients—for the group with high market-to-book ratios than for those with low market-to-book ratios, providing additional validity for our measure of organization capital.

3.5 Organization Capital and Equity Valuation

Having developed an estimate of organization capital, we now examine its validity by incorporating the measure in a widely used equity (stock) valuation model. This model (Preinreich 1938; Ohlson 1995) relates the firm's stock price to its assets in place (property, plant, inventory, etc., minus liabilities) plus its growth potential, which is measured by the firm's future abnormal earnings. These are the earnings in excess of the required rate of return on assets (cost of capital). This model thus relates the value of the firm to its existing assets plus the future abnormal earnings, or growth potential, expected from these assets:
(11)   V_it = BV_it + RE_it,   that is,   V_it – BV_it = RE_it,
where V_it is firm i's market value at the end of year t, BV_it is its book value (balance sheet value of assets minus liabilities) at the end of year t, representing assets in place, and RE_it is firm i's present value of abnormal earnings at the end of year t (growth potential). The right-hand rearrangement of expression (11) shows that the difference between market value and the value of assets in place is equal to the present value of abnormal earnings.20

20. The familiar Tobin's q ratio, used to assess a firm's investment opportunities, relates the firm's market value to the replacement cost of its assets. The latter is usually proxied by book value of assets, given difficulties in estimating replacement costs. The right-hand arrangement of expression (11) is Tobin's q as a difference, rather than a ratio. The gap between market and book value, representing investment opportunities in the q theory, is manifested in expression (11) by RE_it, the present value of abnormal earnings.
Section 3.2 mentions organization capital as an unmeasured resource, since it is not reflected in firms' balance sheets and is therefore absent from BV in equation (11). Hence, if our measure of organization capital captures important elements of the firm's future abnormal earnings potential, then it should be associated with the difference between the market and the book value of equity (expression [11]). Accordingly, our first validation test of the measure of organization capital is to estimate the following regression:
(12)   (MV_it – BV_it)/SALE_it = q_0 + q_1[Year dummies]/SALE_it + b_1(OC_it /SALE_it) + e_it,

where MV_it is firm i's market value of equity three months after the end of fiscal year t, BV_it is firm i's book value of equity at the end of year t, OC_it is the firm's contribution of organization capital to sales (expression [8]), and SALE_it is the sales of firm i in fiscal year t.21 We scale (deflate) all the variables by SALE to control for size effects.22 Note that MV and BV in expression (12) are stock variables, whereas OC is a flow variable (the contribution of organization capital to annual sales). The estimate of b_1 obtained from expression (12) will therefore indicate the horizon over which investors capitalize organization capital.

The market value of equity is computed as the stock price multiplied by the number of common shares outstanding, obtained from the Center for Research in Security Prices (CRSP) database. Financial data on book value (Compustat data item no. 11) were obtained from Compustat. Our estimations omitted firms with negative book value of equity. The sample contains 44,073 firm-year observations, spanning 1978–2002.

Table 3.4, panel A, provides descriptive statistics; panel B presents the estimates of expression (12). The adjusted R2 value for the regression is approximately 24 percent, which indicates that OC by itself explains a quarter of the cross-sectional variation in the difference between market and book values of equity. This suggests that OC is strongly related to the firm's capacity to generate future abnormal earnings, or growth. The coefficient on OC is 2.26, indicating that the contribution of organization capital in a given year to the present value of future abnormal earnings lasts for about three years, using a discount rate of 12 percent (i.e., 1/1.12 + 1/1.12^2 + 1/1.12^3 = 2.40). This is consistent with the observation in the Xerox example (figure 3.1) that OC leads the stock price by about two years. Table 3.4, panel C, provides the estimates of expression (12) for the high, medium, and low market-to-book value of equity subsamples.

21. The market value of firm i, MV_it, is measured three months after the end of the year to allow stock prices to reflect the financial results of year t, which have to be publicly disclosed during the ninety days after the end of the year.
22. See Brown, Lo, and Lys (1999) for the appropriateness of using sales as the scaling variable.
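A minimal sketch of how expression (12) could be estimated follows (our own illustration, not the authors' code). The column names mv, bv, oc, sale, and year are assumed placeholders, and the deflated year effects are implemented as year dummies interacted with 1/SALE, following the scaled form of the expression.

    import pandas as pd
    import statsmodels.formula.api as smf

    def estimate_expr12(df: pd.DataFrame):
        """OLS estimate of the valuation regression in expression (12)."""
        d = df[df['bv'] > 0].copy()                # drop negative book equity, as in the text
        d['y'] = (d['mv'] - d['bv']) / d['sale']   # market minus book value, scaled by sales
        d['oc_s'] = d['oc'] / d['sale']            # scaled OC contribution
        d['inv_sale'] = 1.0 / d['sale']            # carrier for the deflated year dummies
        return smf.ols('y ~ C(year):inv_sale + oc_s', data=d).fit()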
Table 3.4    Organization capital and equity values

A: Descriptive statistics
Variable          Mean    Standard deviation   Min.     First quartile   Median   Third quartile   Max.
MV ($ millions)   1,521   8,146                2        65               201      739              512,833
BV ($ millions)   691     2,423                1        46               126      412              79,722
RE ($ millions)   124     759                  –7,163   –8               2        49               32,399
OC ($ millions)   145     1,527                –4,441   1                15       74               91,773
r (%)             12      .04                  8        9                12       15               31

B: Contribution of organization capital to equity valuation—Whole sample
Expression (12)   Coefficient   t-statistic   P
OC (b1)           2.26          12.85         .00
Adj. R2           23.91%

C: Contribution of organization capital to equity valuation—Partitioned by market-to-book value of equity
Expression (12)
Market-to-book value   Coefficient   t-statistic   P
Low
  OC (b1)              .35           8.95          .00
  Adj. R2              11.34%
Medium
  OC (b1)              1.65          8.01          .00
  Adj. R2              49.38%
High
  OC (b1)              4.67          10.05         .00
  Adj. R2              54.55%

D: Comparing organization capital's and analysts' residual earnings' contribution to valuation—Whole sample
           Expression (14)              Expression (15)
           Coef.    t-stat.   P         Coef.    t-stat.   P
OC (b1)    2.02     29.42     .00
RE (b2)    1.17     15.42     .00       1.32     14.06     .00
Adj. R2    32.65%                       24.73%

E: Comparing organization capital's and analysts' residual earnings' contribution to valuation—Partitioned by market-to-book value of equity
                       Expression (14)              Expression (15)
                       Coef.    t-stat.   P         Coef.    t-stat.   P
Low market-to-book
  OC (b1)              .32      7.15      .00
  RE (b2)              .41      8.25      .00       .45      9.61      .00
  Adj. R2              24.79%                       15.09%
Medium market-to-book
  OC (b1)              1.52     17.18     .00
  RE (b2)              .64      7.58      .00       1.02     9.34      .00
  Adj. R2              52.30%                       22.50%
High market-to-book
  OC (b1)              3.15     21.39     .00
  RE (b2)              2.62     24.71     .00       3.73     30.59     .00
  Adj. R2              55.25%                       32.56%

Notes: The t-statistics are White's heteroskedasticity-adjusted t-statistics. The low, medium, and high market-to-book value of equity groups contain the bottom, middle, and top one-third of the observations sorted each year on the market-to-book value of equity. The sample contains 44,073 firm-year observations spanning 1978–2002. MV is the market value of equity four months after the fiscal year-end; BV is stockholders' equity; the discount rate (r) is computed as beta times a risk premium of 5.5 percent plus the twelve-month Treasury bill rate. The beta value is obtained from the CAPM using the monthly returns for firm i from year (t – 5) to (t – 1). RE is the sum over five years of the discounted abnormal earnings plus a terminal value. Specifically, RE_it = Σ_{k=1,...,5} [FE_ikt – r_it BV_{i,(k–1)t}](1 + r_it)^{–k} + [FE_i6t – r_it BV_i5t](r_it – g_it)^{–1}(1 + r_it)^{–5}, where g_it is the growth in [FE_i6t – r_it BV_i5t]. FE_ikt is the consensus analysts' earnings forecast k years ahead for firm i, four months after fiscal year t; V_it is the value of equity, computed as the sum of RE_it and BV_it; OC_it is the abnormal output, computed as the difference between the predicted value of sales obtained by estimating expression (4) and the predicted value of sales with the asset productivities alone (i.e., expression [8]).
As noted in section 3.4, companies with high market-to-book values are more intangible intensive, and hence the contribution of OC to future abnormal earnings should be greater. The estimates are consistent with this notion: for the high market-to-book firms, the coefficient on OC is 4.67, which suggests that the contribution of organization capital to future abnormal earnings lasts for about five years, using a discount rate of 12 percent, whereas, for the medium market-to-book firms, the OC contribution to future abnormal earnings lasts for about two years (and less than a year for low market-to-book firms). We thus conclude that our measure of organization capital captures a major component of the firm's intangible assets, or growth potential, as reflected by the gap between the market and book (accounting) values of public companies.

Organization capital is a major contributor to future earnings, and hence OC would be partially or fully subsumed by the present value of abnormal earnings (RE); for this reason, we omitted RE_it from the regression in expression (12).
We now introduce a widely used proxy for future earnings—financial analysts' forecasts of earnings—into the valuation model.23 In essence, we wish to test the extent to which financial analysts, the major information intermediaries in capital markets, comprehend the value and profit implications of organization capital in their analyses and consequent earnings forecasts. Accordingly, we compute the present value of the firm's abnormal earnings as the sum of two components: the present value of abnormal earnings in the next five years, based on explicit analysts' forecasts of annual earnings, plus the present value of abnormal earnings from year six to infinity. The overall present value of abnormal earnings is defined thus:
(13)   RE_it = Σ_{k=1,...,5} [FE_ikt – r_it BV_{i,(k–1)t}](1 + r_it)^{–k} + [FE_i6t – r_it BV_i5t](r_it – g_it)^{–1}(1 + r_it)^{–5},
where FE_ikt is the consensus analysts' earnings forecast for firm i, k years ahead, made four months after the end of fiscal year t (to allow financial analysts access to the annual report of year t); r_it is the discount rate applied to future earnings; BV_it is the book value (net assets) of firm i at the end of year t (BV_ikt = BV_{i,(k–1)t} + FE_ikt – DIV_ikt); DIV_it is the dividend the firm paid in year t; and g_it is the growth rate of abnormal earnings in year six (FE_i6t – r_it BV_i5t).24 Thus, residual earnings for each future year—the difference between analysts' earnings forecast for that year (FE) and a charge for the cost of equity (rBV)—are predicted for each company for the next five years, followed by a terminal value, based on a constant growth expression.

We estimate the firm-specific discount rate, r_it, using the capital asset pricing model (CAPM), with a beta (systematic risk) indicator estimated in a conventional fashion (regressing the firm's monthly stock returns on the market return), using returns from the preceding sixty months. The discount rate is then set equal to the twelve-month Treasury bill rate (the riskless rate) plus the estimated beta multiplied by an equity risk premium of 5.5 percent.25

23. Commercially available forecasts of future earnings are provided by various sources for most public companies for one, two, and sometimes three years out, as well as long-term (five years out) growth rates of earnings. We obtained analysts' consensus (mean) earnings forecasts (for multiple forecasts per firm) and long-term growth estimates from I/B/E/S (First Call).
24. The growth rate from year five on, g_it, is set equal to r_it minus 3 percent if g_it exceeds (r_it – 0.03). We used the current dividend payout ratio (dividends to earnings) to estimate expected dividends and winsorized the dividend payout ratio at 10 percent if the current dividend payout ratio exceeded 10 percent.
25. We obtain similar results when using a constant discount rate of either 10 percent or 12 percent for all firms.
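A minimal sketch of the residual-earnings calculation in expression (13) follows (our own illustration, not the authors' code). The argument names are assumptions; fe holds the six consensus forecasts FE_1 through FE_6, and the discount rate would be built as the twelve-month Treasury bill rate plus beta times a 5.5 percent equity premium, as described above.

    def residual_earnings(fe, bv0, r, g, payout):
        """Present value of abnormal earnings, expression (13).

        fe     : consensus earnings forecasts FE_1, ..., FE_6 (six values)
        bv0    : current book value of equity, BV_t
        r      : firm-specific discount rate (e.g., riskless rate + beta * 0.055)
        g      : growth rate of abnormal earnings beyond year five (capped below r)
        payout : dividend payout ratio used to roll book value forward
        """
        bv = bv0
        re = 0.0
        for k in range(1, 6):                     # explicit forecast years 1-5
            re += (fe[k - 1] - r * bv) / (1 + r) ** k
            bv += fe[k - 1] * (1 - payout)        # clean surplus: BV_k = BV_{k-1} + FE_k - DIV_k
        terminal = (fe[5] - r * bv) / (r - g)     # year-six abnormal earnings in perpetuity
        return re + terminal / (1 + r) ** 5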
To examine the extent to which financial analysts incorporate elements of the organization capital estimate, OC, into their future earnings forecasts, we estimate the following two expressions:

(14)   (MV_it – BV_it)/SALE_it = q_0 + q_1[Year dummies]/SALE_it + b_1(OC_it /SALE_it) + b_2(RE_it /SALE_it) + e_it,

and

(15)   (MV_it – BV_it)/SALE_it = q_0 + q_1[Year dummies]/SALE_it + b_2(RE_it /SALE_it) + e_it,

where all variables retain their previous definitions. Expression (15) provides the estimation of the valuation model without organization capital, OC, whereas expression (14) includes organization capital. If analysts fully incorporate the contribution of organization capital to future earnings in their earnings forecasts, then OC will be subsumed in RE, and the coefficient on OC in expression (14) will be zero. Conversely, if analysts do not fully incorporate the effect of organization capital in their earnings forecasts, the coefficient on OC (b_1) will be positive, and the explanatory power of expression (14) will be higher than that of expression (15). In this manner, we assess analysts' ability to value firms' organization capital.

Table 3.4, panel D, provides estimates of expressions (14) and (15), indicating that the addition of organization capital (OC) to the valuation expression (14) substantially improves the explanatory power: from 24 percent (adjusted R2 of expression [15]) to 32 percent (adjusted R2 of expression [14]). The coefficient estimate on the flow variable OC is 2.02 and is highly statistically significant, indicating that financial analysts fail to capture a major asset (organization capital) that contributes to future profitability. Table 3.4, panel E, provides estimates of expressions (14) and (15) when the sample is partitioned into three groups of firms ranked by market-to-book value of equity. For all partitions, the OC measure improves the explanatory power of the gap between market and book values. However, OC provides the largest improvement in explanatory power for the medium and high market-to-book groups: the adjusted R2 of expression (14) is 52 percent (medium) and 55 percent (high) versus 22 percent and 32 percent for expression (15), respectively. This suggests that financial analysts, the highly influential information intermediaries in capital markets, fail to fully comprehend the value and impact of organization capital, and that this failure is particularly acute for high-growth (high market-to-book) firms. The adjusted-R2 measures of the yearly estimates of expressions (14) and (15) are presented in figure 3.2: the top line is the adjusted-R2 value of the model that includes organization capital (expression [14]), and the bottom line is for the model without organization capital (expression [15]).
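Using the same scaled variables as in the expression (12) sketch above, the comparison of expressions (14) and (15) reduces to a nested-model comparison; a minimal sketch follows (our own illustration, with re_s assumed to hold RE/SALE).

    import statsmodels.formula.api as smf

    def compare_expr14_15(d):
        """Adjusted R-squared gain from adding OC (expression [14] versus [15])."""
        m15 = smf.ols('y ~ C(year):inv_sale + re_s', data=d).fit()           # without OC
        m14 = smf.ols('y ~ C(year):inv_sale + re_s + oc_s', data=d).fit()    # with OC
        # If analysts fully impound OC in their forecasts, the coefficient on oc_s
        # should be near zero and the two adjusted R-squareds should be similar.
        return m14.rsquared_adj - m15.rsquared_adj, m14.params['oc_s']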
Fig. 3.2   Explanatory power of expressions (14) and (15)
The figure indicates that the incremental contribution of our estimate of organization capital over the information in the present value of residual earnings (the vertical difference between the lines) was positive throughout the period but decreased during the late 1990s. The probable reason: financial analysts, whose forecasts determine RE in expression (15), are becoming increasingly sophisticated in incorporating the benefits of organization capital in their earnings forecasts.

3.6 Concluding Remarks

Organization capital is the major idiosyncratic resource that affects the performance and growth of firms. However, this resource is not measured internally by companies, nor is it reported to capital markets. In this study, we develop a firm-specific measure of organization capital and estimate it for a large sample of publicly traded companies. To validate our measure, we examine the association between information technology expenditures and organization capital and find that IT is highly associated with organization capital. We then test the validity of our measure within a widely used investment valuation model and show that it contributes significantly to the explanation of market values of firms, beyond the traditional indicators of assets in place (book value) and expected abnormal earnings (growth potential). We also document that financial analysts, the major information intermediaries in capital markets, fail to fully comprehend the value of firms' organization capital, probably because of the absence of relevant information on this resource in corporate financial reports (e.g., no data on IT expenditures, employee training, brand enhancement activities, etc.). Our measure of organization capital is, therefore, expected to improve both resource allocation within companies and investors' valuations.
Appendix
Industry Definitions26

1. Consumer nondurables: food, tobacco, textiles, apparel, leather, toys
   SIC codes: 0100–0999, 2000–2399, 2700–2749, 2770–2799, 3100–3199, 3940–3989
2. Consumer durables: cars, TVs, furniture, household appliances
   SIC codes: 2500–2519, 2590–2599, 3630–3659, 3710–3711, 3714–3714, 3716–3716, 3750–3751, 3792–3792, 3900–3939, 3990–3999
3. Manufacturing: machinery, trucks, planes, office furniture, paper, commercial printing
   SIC codes: 2520–2589, 2600–2699, 2750–2769, 3000–3099, 3200–3569, 3580–3629, 3700–3709, 3712–3713, 3715–3715, 3717–3749, 3752–3791, 3793–3799, 3830–3839, 3860–3899
4. Energy: oil, gas, and coal extraction and products
   SIC codes: 1200–1399, 2900–2999
5. Chemicals and allied products
   SIC codes: 2800–2829, 2840–2899
6. Business equipment: computers, software, and electronic equipment
   SIC codes: 3570–3579, 3660–3692, 3694–3699, 3810–3829, 7370–7379
7. Telecom: telephone and television transmission
   SIC codes: 4800–4899
8. Utilities
   SIC codes: 4900–4949
9. Wholesale, retail, and some services (laundries, repair shops)
   SIC codes: 5000–5999, 7200–7299, 7600–7699
10. Health care, medical equipment, and drugs
    SIC codes: 2830–2839, 3693–3693, 3840–3859, 8000–8099
11. Money, finance
    SIC codes: 6000–6999
12. Other: mines, construction, building materials, transportation, hotels, business services, entertainment
    SIC codes: all other SIC codes

26. Based on Fama and French (1988, 1997).
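These groupings translate directly into a lookup function. A minimal Python sketch follows (our own illustration); the dictionary simply transcribes the SIC ranges listed above, and any code outside them falls into group 12.

    INDUSTRY_SIC_RANGES = {
        1:  [(100, 999), (2000, 2399), (2700, 2749), (2770, 2799), (3100, 3199), (3940, 3989)],
        2:  [(2500, 2519), (2590, 2599), (3630, 3659), (3710, 3711), (3714, 3714), (3716, 3716),
             (3750, 3751), (3792, 3792), (3900, 3939), (3990, 3999)],
        3:  [(2520, 2589), (2600, 2699), (2750, 2769), (3000, 3099), (3200, 3569), (3580, 3629),
             (3700, 3709), (3712, 3713), (3715, 3715), (3717, 3749), (3752, 3791), (3793, 3799),
             (3830, 3839), (3860, 3899)],
        4:  [(1200, 1399), (2900, 2999)],
        5:  [(2800, 2829), (2840, 2899)],
        6:  [(3570, 3579), (3660, 3692), (3694, 3699), (3810, 3829), (7370, 7379)],
        7:  [(4800, 4899)],
        8:  [(4900, 4949)],
        9:  [(5000, 5999), (7200, 7299), (7600, 7699)],
        10: [(2830, 2839), (3693, 3693), (3840, 3859), (8000, 8099)],
        11: [(6000, 6999)],
    }

    def industry_group(sic: int) -> int:
        """Map a four-digit SIC code to one of the 12 industry groups in the appendix."""
        for group, ranges in INDUSTRY_SIC_RANGES.items():
            if any(lo <= sic <= hi for lo, hi in ranges):
                return group
        return 12  # Other: mines, construction, transportation, hotels, services, etc.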
References

Aghion, P., and P. W. Howitt. 1998. Endogenous growth theory. Cambridge: MIT Press.
Arrow, K. J. 1962. The economic implications of learning by doing. Review of Economic Studies 29 (3): 155–73.
Atkeson, A., and P. J. Kehoe. 2002. Measuring organizational capital. NBER Working Paper no. 8722. Cambridge, MA: National Bureau of Economic Research.
Becker, G. S. 1993. Human capital: A theoretical and empirical analysis, with special reference to education. Chicago: University of Chicago Press.
Brown, S., K. Lo, and T. Lys. 1999. Use of R2 in accounting research: Measuring changes in value relevance over the last four decades. Journal of Accounting and Economics 28 (2): 83–116.
Brynjolfsson, E., and L. Hitt. 1996. Paradox lost? Firm-level evidence on the returns to information systems spending. Management Science 42 (4): 541–58.
Brynjolfsson, E., and S. Yang. 1999. The intangible benefits and costs of computer investments: Evidence from the financial markets. Massachusetts Institute of Technology, Sloan School of Management. Working Paper.
Chan, L. K. C., J. Lakonishok, and T. Sougiannis. 2001. The stock market valuation of research and development expenditures. Journal of Finance 56 (6): 2431–56.
Ericson, R., and A. Pakes. 1995. Markov-perfect industry dynamics: A framework for empirical work. Review of Economic Studies 61 (1): 53–82.
Evenson, R. E., and L. E. Westphal. 1995. Technological change and technological strategy. In Handbook of development economics, ed. J. Behrman and T. N. Srinivasan, 2211–98. Amsterdam: North-Holland.
Fama, E. F., and K. R. French. 1988. Permanent and temporary components of stock prices. Journal of Political Economy 96 (2): 246–73.
———. 1997. Industry cost of equity. Journal of Financial Economics 43 (2): 153–94.
Gordon, R. 2000. Does the "new economy" measure up to the great inventions of the past? Journal of Economic Perspectives 14 (4): 49–74.
Gordon, R., and M. N. Baily. 1993. The jobless recovery: Does it signal a new era of productivity-led growth? Brookings Papers on Economic Activity, Issue no. 1: 271–316.
Griliches, Z., and J. Mairesse. 2000. Production functions: The search for identification. NBER Working Paper no. 5067. Cambridge, MA: National Bureau of Economic Research.
Hall, B. H. 1993. Industrial research during the 1980s: Did the rate of return fall? Brookings Papers on Economic Activity, Microeconomics: 289–330.
Hall, R. E. 2000. E-capital: The link between the stock market and the labor market in the 1990s. Stanford University, Hoover Institution and Department of Economics. Working Paper.
Jovanovic, B. 1979. Job matching and the theory of turnover. Journal of Political Economy 87 (5): 972–90.
Lev, B., and T. Sougiannis. 1996. The capitalization, amortization, and value-relevance of R&D. Journal of Accounting and Economics 21:107–38.
McKinsey Global Institute. 2002. Learning to love recessions. The McKinsey Quarterly, no. 2:4–5.
Ohlson, J. 1995. Earnings, book values, and dividends in security valuation. Contemporary Accounting Research 11 (2): 1–23.
Preinreich, G. A. D. 1938. Annual survey of economic theory: The theory of depreciation. Econometrica 6 (3): 219–41.
Prescott, E. C., and M. Visscher. 1980. Organization capital. Journal of Political Economy 88 (3): 366–82.
Rosen, S. 1972. Learning by experience as joint production. Quarterly Journal of Economics 86 (3): 366–82.
Solow, R. 1957. Technical change and the aggregate production function. Review of Economics and Statistics 39 (3): 312–20.
Tomer, J. F. 1987. Organizational capital: The path to higher productivity and well-being. New York: Praeger.
Comment
Timothy F. Bresnahan
The goal of this paper is an important one. The authors set out to define "organizational capital," to measure it, and to see whether capital markets properly value it in large companies. The importance arises from several conjectures that I and many others share with the authors. Few doubt that the organization of firms is among the determinants of productivity and productivity growth. Few doubt the value of a set of systematic measures in this area. Few doubt that reorganization of firms arising from use of information technology will leave customers better off and successful firms with higher market shares. And finally, few doubt that the widespread assertion of a "new economy" form of organizational capital gave capital markets an extraordinarily difficult valuation problem in the late 1990s. All these conjectures lead us toward goals like those in this paper.

The paper's methods to measure organizational capital center on production function residuals at the firm level. The authors focus on large, publicly traded firms. In the first approach, a Cobb-Douglas production function, with R&D as one of the inputs, is estimated in first differences with firm dummies. The residual is cumulated and called organizational capital. In this approach a high rate of growth of firm sales, above and beyond growth in employment, physical capital, and R&D capital, measures the rate of growth of organization capital. The firm-specific effects are modeled as moving over time. At year t, the measure of organizational capital growth comes from the model run on years t – 4 through t. The next year, the measure of organizational capital growth comes from the model run on years t – 3 to t + 1, and so on.

In a second approach, the firm's sales, general, and administrative costs (SGA) are taken to be a measure of expenditures to gain organizational capital (OC). A production function more general than the Cobb-Douglas is estimated; all the Cobb-Douglas parameters are allowed to vary with SGA. The measure of OC comes from comparing predicted sales at SGA = 0 to the actuals. Thus, in the second version, a productivity residual is projected onto SGA and the interaction of SGA with other inputs and called OC.

Timothy F. Bresnahan is the Landau Professor of Technology in the Economy at Stanford University and a research associate of the National Bureau of Economic Research.
With those two sets of estimates in hand, the paper examines several economic questions. The authors seek to document the size and growth of measured OC in the class of firms they study. They investigate whether measured OC is correlated with any of several variables associated with theories of organizational improvement. These include market share, computer investment, and administrative expenses. Finally, the authors examine the stock market valuation of measured OC using standard finance methods. They are interested in both the degree to which the stock market values companies that have high measured OC and the question of whether the stock market over- or undervalues such companies.

The particular methods choices determine the empirical content of the measures of OC. Much trouble arises because it is difficult to become convinced that production function residuals at the firm level measure OC. I will begin with a discussion of what we have learned from earlier research that uses methods related to this paper. I will then turn to the main results and their interpretation. Potential worries about whether the object that has been measured might be precisely something called "organizational capital" grow as we examine the authors' tests of organizational theories, their growth accounting, and their capital market investigations. I will go on to talk about the conceptual and data limitations that are holding back this entire field, not just this paper.

Methods I: Residuals

The most important modeling choice in the paper is to measure OC either as a production function residual at the firm level or as the portion of a production function residual that is explained by variation in SGA. In general, estimating production functions is not a happy task. It is very difficult to succeed in that effort either at the aggregate level or at the individual firm level. At the aggregate level, it is difficult to get the output deflators right, so that real output—the dependent variable in the production function—is always suspect. At the firm level, that particular problem is somewhat ameliorated. Firms that have some hard-to-measure competitive advantage, such as better products, will tend to have larger market shares in industry equilibrium. They will thus tend to have larger nominal sales. To the degree it comes from a superior production technology that is uncorrelated with other factors of production, production function estimates will capture it. That is the basic logic under which this paper might be right. Organization capital is likely to be reflected in superior products, especially in the service sectors. Thus, looking at a firm-level productivity residual may well capture OC if it is there. The downside of working at the firm level is that firms are highly heterogeneous. Unobserved heterogeneity in firm-level productivity or demand makes estimating the production function difficult.
Even with panel data on firms, it can be hard to learn the firm-level production function, especially if one is hoping to tell apart selection, endogeneity in the inputs, and true advantages to the firm in better productivity.1 The firm-level productivity residual can measure, inter alia, unobserved demand and supply movements at the firm and industry level and the firm's response to those shocks. This is the downside of the logic behind the measurement strategy of this paper. The firm-level production function residual may well contain the effects of OC if it is there, but it will also contain everything else about how firms are different, or in different circumstances. The fundamental idea of the paper is a relative one. Those firms with more output growth per unit input growth relative to other firms are labeled as high-measured-OC firms. On balance, I suspect firm-level analysis is likely the least bad for the purposes of this paper, which is about firms, but it is important to recall what it can and cannot do. The paper's calculations, which make no effort to control for the most familiar measurement problems in this area, are likely suspect.

Methods II: Organizations

The second fundamental point to make about methods relates to the measurement of organizations. The paper does not measure firm organization. There are no measures of hierarchy or its absence; no measures of centralization or decentralization; no measures of the degree to which workers are on dynamic incentive contracts; no measures of the divisional structure of the firm; no measures of the workplace organization in productive units; no measures of corporate culture; and no measures of the role of management. In short, there are no measures suggested by the theoretical or empirical literatures on organizational behavior or the economics of organizations.2

The paper does use the firm's expenditures on SGA as a predictor of the production function residual. The implicit story is that a firm that spends more money on administration (the "A" in SGA) is organizing itself better. For a number of reasons, the dollars spent by a firm on SGA have only a very distant linkage to "organizational capital" in the sense of the paper. The most direct point is that a badly organized firm may need more managers, not fewer. A separate point is that SGA includes marketing expenditures as well as managerial ones.

1. While these arguments are longstanding, perhaps the clearest sense of the trade-offs can be found in Griliches and Mairesse (1998). That paper reports an interesting effort to deal with the problem, as does one by Olley and Pakes (1996), with a nonparametric selection model.
2. A summary of the relevant theory can be found in Milgrom and Roberts (1992). Recent empirical and theoretical developments were reported at the NBER Organizational Economics Conference: see http://www.nber.org/~confer/2002/orgec02/program.html.
There are also serious econometric issues associated with SGA. Of course it is possible that the SGA coefficients measure managers' organizational contribution. To the extent that organizing the firm is costly and these costs are reflected in SGA, that would be the right interpretation. There is a problem with the implicit identifying assumption in that (most positive) case. If firms with higher SGA are more productive, are firms with lower SGA making mistakes? More serious problems arise when either SGA is not the only driver of improved organization or it is not given by some exogenous process. Suppose that there are organizational improvements that do not lead to more expenditure on SGA, such as improvements in workplace organization, streamlining bureaucracy, improvements in the quality of decisions and their communication, and so on. The paper will not measure these. The paper will, incorrectly, measure correlations between SGA and the firm residual that are not causal. If firms with looser budgets spend more on management salaries, for example, that will be measured as OC in this paper. Thus, an uptick in sales, part of which becomes management rents, is called OC.

My real point is that there is only so much that can be done with production function residuals. This paper labels a production function residual as OC. Sometimes, it uses only the part of the residual that can be predicted by SGA, or SGA interacted with the factors of production. It is always using a productivity residual, however. An alternative approach would be to link outcomes at the firm level empirically to data that actually measure something about organizations. There are many approaches to that (see note 3). In recent work related to this paper, Erik Brynjolfsson, Lorin Hitt, and I (Bresnahan, Brynjolfsson, and Hitt 2002) have tested the hypothesis that investments in information technology are complementary to workplace organization. In our work, the organizational variables we use are measures of the way production workers are connected to the firm—how they are supervised, whether they work in teams, the nature of their in-house training, and so on. In my view, much of the reason to believe the conclusion that the results are specifically organizational depends on the use of these data. We are hardly alone in this. For example, Cockburn and Henderson (1998) have interesting measures of the organization of laboratories in pharmaceutical firms, related to the firms' ability to absorb scientific knowledge and turn it into profitable products. One could go on in this vein for some time. Recent research has made some progress in measuring organizational features at the firm and industry level and relating them to economic outcomes, but there is still much work to be done. Much of the most promising work has used case study or historical methods. Statistical methods have crashed up against fundamental data limitations. This is not a problem of measuring residuals.
Scholars who seek to measure organizational effectiveness using data on the structure of organizations, on incentives within the organization, on management, and so on—all the things that would be the beginnings of empirical correlates of organizational capital—do not typically use the same data used to study national economic growth. Indeed, most of the research literature in organizations goes forward without using any of the data resources of the federal statistical system at all, but is rather based in the hard work of scholars in business schools. The issue is not, I hasten to add, that the people working in the federal statistical system are unaware of the changes in the organization of firms and industries that matter for modern economic growth.3 Nor is the problem that the federal statistical system fails to understand the importance of making data resources available to researchers for purposes of generating new knowledge. Like me, they recognize the value of a thriving empirical economic research program for future economic policymaking. The issue is simply one of resources.

Results

While the paper has a number of subsidiary results, the main results are as follows:

1. Measured organizational capital is large and makes a substantial contribution to the growth of firm output. The size of measured OC is different in two methods used by the authors, but it is in either case a substantial fraction of physical or R&D capital for the firms they study.
2. Measured OC is persistent at the firm level. The paper reports the persistence of its measures of OC and finds that it is high. The paper also examines the underlying production function residuals themselves, finding them to be highly persistent. Thus, the persistence of measured OC is not an artifact of the particular time series process assumed by the authors.
3. Measured OC is correlated in the cross section with
   a. Firm market share in its two-digit SIC industry
   b. Firm expenditures on information technology
   c. SGA (for the first OC measure)
4. Value in the stock market at the moment the authors' OC measure could be calculated.
5. Stock market gains after their OC measure could be calculated.

3. The federal statistical system has recently undertaken a very wise overhaul of the way it classifies establishments, of which an important part is recognizing some organizational issues.
These last two results come from a standard financial markets valuation regression. The finding that OC predicts firm value is in a regression in which several other assets and analysts' forecasts are held fixed.

Interpretation of Results

The authors have made the tables, and any discussion naturally begins with their interpretation. It should not end there, however, as many of the substantive interpretations we find in this paper involve further unverified assumptions. Findings 1 and 2, substantial measured organizational capital and substantially persistent measured OC, seem to me to be better labeled "assumptions" than "results." The paper reports that the contribution of organizational capital to the growth of sales is large. Since OC was measured as a residual (or a residual projected onto SGA) from a first-differenced production function, the contribution of OC to growth in output is an identity, not a result. Similarly, the finding that measured OC is persistent at the firm level is the finding that a firm production function residual is persistent at the firm level. That well-known fact establishes that firms are different in ways that move slowly, not necessarily that the reason for that is OC.

The three subfindings under result 3 are interpreted by the authors as evidence that their economic interpretation is right. I am less convinced. Findings 3a, 3b, and 3c establish that the production function residual is correlated with market share, IT budgets, and SGA. I have already talked about the SGA correlation. The interpretation the authors offer for the first two correlations is that improvements in OC improve firm competitive position and that OC and IT are complements. I agree with both of those economic conclusions but do not see any evidence for them in this paper.

The finding about market share seems to me to be simply a restatement of what the paper does. Positive production function residuals arise when a firm's sales rise more rapidly than its inputs. A firm will have a larger value of those residuals when, relative to other firms, its sales/input is rising rapidly. That could happen because the firm has grown more productive through a positive improvement in its organization, the authors' preferred interpretation, or because there has been an outward shock to the firm's demand for any reason, which would cause output and measured productivity to rise. There is a substantial literature that attempts to sort out these two different flows of causation in order to identify the part of the firm-level productivity residual that is in fact a shock to the production function. This paper interprets the entire correlation as productivity.

The finding about IT is one of a long series of findings that IT budgets are correlated with success at the firm level. The work of Erik Brynjolfsson, alone and with collaborators, has established this fairly clearly.
In work that I did with Erik and Lorin Hitt, cited above, the correlation is very carefully unpacked to learn how much of it is related to improvements in organization that lead to productivity improvements and how much reflects other flows of causation. There is no similar effort in this paper. In short, I agree that improvements in organization at the firm level, often in recent years connected to IT, are an important area for productivity growth. What I don't see in this paper is an attractive measure of that or any evidence for it. The paper's complaint that "there exist no operational measures of firms' organizational capital" is still right.

Capital Markets

The paper goes on to show that high-measured-OC firms are (finding 4) more valuable in the stock market and (finding 5) get predictable stock market gains. The authors conclude that investors reward firms that have invested in OC before the returns on that capital are visible (4) but undervalue this capital (5). The finding that high-measured-OC firms are more valuable in the contemporary stock market (4) follows from the measurements. Any economic interpretation of measured OC would likely lead to finding 4—including the authors' interpretation, but also including any other. Finding 5 is more interesting. Whenever stock market returns can be predicted in a regression, there is something to pay attention to. The undervaluation conclusion follows from the empirical finding that returns in stock prices can be econometrically predicted using the measure of OC.4 I am dubious about the economic conclusion. My doubt has its roots in two arguments, one about the sample period and the other about measured OC.

The sample period begins in the 1990s (for this exercise) and ends in 2000. That is a period of surprising sustained macroeconomic expansion. The capital markets appeared to base valuation of companies in this era at least in part on the view that the aggregate economy would continue to have surprising growth, and so that growth companies would continue to have good prospects. This is not an idea that was quite so important in the capital markets either earlier or, as we now know, later. In general, this was a good period in which to have been holding equities in firms that grew with or faster than the aggregate economy. A bull market, in short. The authors' measures of OC are measures of relative company growth. When company sales per unit input rise, measured OC will rise. So the evidence in this paper that there is a "misevaluation of organizational capital on the stock market" (or the need for a new risk factor) is this. Holding a portfolio of growth companies during the 1990s would have led to extraordinary gains.

4. An alternative interpretation the authors offer is that this finding might point us toward "a new risk factor associated with organizational capital." I will treat that as part of their main interpretation.
to extraordinary gains. It seems to me that there is a long step from that evidence to the conclusion that the capital markets undervalue "organizational capital." Those remarks merely report what I hope is standard applied finance common sense. You could "show" by these methods that many different assets were "undervalued" in the 1990s. U.S. equities in general, for example, could be "shown" to be undervalued in that period by the methods used in this paper. Anyone who held U.S. equities through that time period earned good returns. The problem is, you are looking at a long bull market. You always do well buying stocks before a bull market. You also will always do particularly well buying high-measured-OC stocks before a bull market. There are many, many plausible ways in which the U.S. stock market was getting valuation wrong in the 1990s. A bubble in which all equities were overvalued is one plausible way, and a bubble in which growth stocks were overvalued is another plausible way. I think that these authors have some more work to do before they make a convincing case that U.S. equities markets were seriously undervaluing growth companies in the 1990s. Two Industries' and Four Firms' "OC" The paper names about a dozen leading firms in several industries as widely cited examples. It attributes the sustained success of these firms to OC, conceptualized as a factor of production unique to these firms within their industries. I know two of the industries from which these firms are drawn, motor vehicles and computing, well enough to use them to consider the broader issues raised by the firm-level OC approach. In automobiles, there is cross-section variation in the effectiveness of firms in managing design, production, and distribution processes. One important innovation here is undoubtedly organizational, at least in part. I refer to the Toyoda system (sometimes called Kanban or "lean production") of manufacturing automobiles. The authors link this organizational innovation to Toyota Motor Company's current market position. That is largely right, but understanding the broader economic issues it raises calls for adding two pieces of context. First, half a century ago when Eiji Toyoda, Ed Deming, and (especially) Taiichi Ohno were working on this organizational improvement, Toyota Motor Company was not the automobile market leader. Back then, General Motors (GM) was the acknowledged leader in "organizational capital" in that industry, based on the (also undoubtedly organizational, at least in part) innovations of Alfred P. Sloan at GM.5 Dramatic improvements in "organizational capital" may create huge gains to society, as they did in this case—twice. General Motors created a new form of organization to leapfrog market leader Ford; Toyota in turn created a new form of organization to leapfrog GM. 5. See, e.g., Womack, Jones, and Roos (1991).
The point is the basic one of creative destruction. Society’s stock of OC does not necessarily come from existing dominant firms, but instead from the dynamic incentives of entrepreneurial outsiders. Society has an interest in the organization of markets as well as of firms. Markets in which creative destruction can play out create new OC—by destroying the economic value of old OC. Second, measuring the stock of OC by comparing leading firms to others is a dangerous business. Successful imitation by other automobile firms lowers Toyota’s measured OC but not its actual OC. It increases the social stock of OC. To the extent that prices fall from successful imitation and competition, the imitating firms do not have any measured OC. Similarly, GM’s measured OC fell as other firms (Ford, Chrysler, and the ancestors of American Motors) imitated its organization, and fell further when new competitive organizations like Toyota entered. Again, the basic conceptual point comes from the economics of competitive markets. There is no simple and direct relationship between firm performance and industry performance. Society’s stock of OC cannot be calculated by adding up the stocks of firm OC as measured in this paper. The authors bring up two examples of firms from the computer industry where they see “organizational capital” as the source of sustained leading positions: IBM and Microsoft. This is partly right, and it raises a new issue. Both of those firms originally built leading positions in that industry through superior performance.6 IBM’s superior performance clearly had organizational roots, as Tom Watson, Jr., built a firm that could both invent and market large-scale computers. Microsoft’s early superior performance was more related to the brilliance of the entrepreneurs who founded it, Paul Allen and (especially) Bill Gates. The long-term success of both of those firms has other causes, however. Each had a long run as the dominant firm in its industry (Microsoft’s continues today). The idea that something you might call “organizational capital” sustained either firm in its long period of industry dominance is false. It was not superior performance that kept IBM in a position of leadership through the 1970s and 1980s, nor is it superior performance that keeps Microsoft in a position of leadership today. Long after its organization had become a liability rather than an asset, IBM held on to market dominance in the mainframe computer industry. The advantages that sustained it were positional, not organizational. Years later, when a long, slow process finally removed IBM from its prior position, the importance of competition and ideas from outside was universally acknowledged, even from ex-IBMers. While slow, the process of creative destruction ultimately worked. In the present, Microsoft’s dominant position in the personal computer (PC) industry does not arise from superior organization. When Internet 6. See Bresnahan and Malerba (1999) for more detail.
entrepreneurs came along, they outinvented and outmarketed Microsoft—just as the young Microsoft outperformed IBM in the early days of the PC. Microsoft continued its dominance by blocking distribution of competitively threatening technologies.7 Where the fall of IBM is an example of successful creative destruction, Microsoft’s ability to evade competition is an example of blocked creative destruction. The broad point is that “what’s good for GM is good for the country” is nonsense. What’s good for the continued growth of the United States and of the world economy is the competitive system that permits creative destruction. Current public policy in the United States has permitted Microsoft to evade even the threat of creative destruction from new competitors born on the Internet. Current public policy is leaning strongly toward putting the “baby bells” back in the position AT&T once enjoyed. When public policy faces decisions like that, it is important to remember the distinction between the economy’s stock of OC, which includes dynamic competition, and leading firms’ stock of measured OC, which is larger when there is less dynamic competition. The paper by Lev and Radhakrishnan focuses on productivity growth in large, existing firms, especially ones that are relatively successful in their industry. The paper makes an argument in favor of focusing exclusively on that topic in the study of productivity growth in the whole economy. The exclusivity is a mistake; the error made by this paper in arguing for exclusivity is a form of an increasingly common error. Productivity, Economic Growth, and Organization Is individual firm OC the growth pole of the U.S. economy? Is sustained firm-level success caused by superior organization pervasive in the economy, observed in almost every industry? Is firm-level OC “the major source of competitive advantage” that we should be careful to value? Is the central locus of this OC in large, established, successful firms? With or without the “organization capital” label, the view that all these questions should be answered “yes” is growing more common in discussion of economic policy formation. That view makes an error at two levels. The view omits most of the forces that matter to productivity growth at the economywide level. It leaves out fundamental advances in science and engineering, for example, and their later conversion into commercially valuable products and processes. It leaves out firm growth, birth, and death, and the importance of market selection in productivity growth. Perhaps most important, it leaves out entrepreneurship and the possibility of creative destruction. At a second level, that view makes an important conceptual error. Around 1970, the world looked to firms like IBM, GM, and AT&T as ex7. While these claims are controversial, they should not be. They are what the Microsoft documents showed and what the courts, both in trial and on appeal, have found.
emplars of excellence in organization. Public policy listened to the idea that they were an excellent form of organization, as did business policy. Later, in the 1980s, many studies of productivity growth looked to international comparisons. The higher rate of productivity growth in Japan than in the United States led to the advice that business and public policy should emulate Japanese keiretsu. My point is that closely linking analysis at the firm level to analysis at the national or world economic growth level is a common mistake. Indeed, the single most common usage of the phrase “new economy” that forms part of the title of this conference was “new economy company.” That meant something very particular in the firm valuation theory that drove the stock market bubble of the late 1990s. It meant that firm-level organizational capital of a new form was the engine of economic growth. A particular form of startup was supposed to be the key exemplar of superior performance. And now we are returning to the view that large-scale successful enterprises have valuable lessons for public policy and business policy—the view of this paper and much other recent analysis. Like the earlier version of that view (1970) or the view that keiretsu were the way to go (1980s) or that overcaffeinated startups had all the organizational capital (1990s), this new view is shortsighted. The error in all four analyses is the same. The error is not the specific one that we paid “too much attention” to large established firms in the 1960s or today or that we paid “too much attention” to startups in the late 1990s or “too much attention” to Japan in the 1980s. The mistake lies in thinking that it is a sensible approach to productivity growth analysis to have a system where we “pay attention” in this sense to anything at all. The view in which scholarship has the job of figuring out what kind of firm organization is a good kind, and business and public policy have the job of “paying attention” to the scholarship and thus to the form of firm, is dangerous nonsense. It reduces economics to a kind of second-rate industrial engineering, and injects an element of central planning into policy formation. Ignoring markets is a serious oversight, both conceptually and for measurement. While organization at the firm level is economically important, focusing too closely on success at the level of the firm ignores much of what is important for growth in a market economy. Measures of firm success that are based in comparison to other firms must take market competition into account.
References Baily, Martin Neil, Charles Hulten, and David Campbell. 1992. Productivity dynamics in manufacturing plants. Brookings Papers on Economic Activity, Microeconomics: 187–249. Bresnahan, Timothy, Erik Brynjolfsson, and Lorin Hitt. 2002. Information tech-
nology, workplace organization, and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics 117 (1): 339–76.
Bresnahan, Timothy, and Franco Malerba. 1999. Industrial dynamics and the evolution of firms' and nations' competitive capabilities in the world computer industry. In Sources of industrial leadership: Studies of seven industries, ed. David C. Mowery and Richard R. Nelson, 79–132. Cambridge: Cambridge University Press.
Cockburn, Iain, and Rebecca Henderson. 1998. Absorptive capacity, coauthoring behavior, and the organization of research in drug discovery. Journal of Industrial Economics 46 (2): 157–82.
Griliches, Z., and J. Mairesse. 1998. Production functions: The search for identification. In Econometrics and economic theory in the twentieth century: The Ragnar Frisch centennial symposium, ed. Steinar Strøm, 169–203. Cambridge: Cambridge University Press.
Milgrom, Paul, and John Roberts. 1992. Economics, organization, and management. Englewood Cliffs, NJ: Prentice Hall.
Olley, Steve, and Ariel Pakes. 1996. The dynamics of productivity in the telecommunications equipment industry. Econometrica 64 (6): 1263–98.
Womack, James P., Daniel T. Jones, and Daniel Roos. 1991. The machine that changed the world: The story of lean production. New York: Harper Collins.
4 Intangible Risk Lars Peter Hansen, John C. Heaton, and Nan Li
4.1 Introduction Accounting for the asset values by measured physical capital and other inputs arguably omits intangible sources of capital. This intangible or unmeasured component of the capital stock may result because some investments from accounting flow measures are not eventually embodied in the physical capital stock. Instead there may be scope for valuing ownership of a technology, for productivity enhancements induced by research and development, for firm-specific human capital, or for organizational capital. For an econometrician, intangible capital becomes a residual needed to account for values. In contrast to measurement error, omitted information, or even model approximation error, this residual seems most fruitfully captured by an explicit economic model. It is conceived as an input into technology whose magnitude is not directly observed. Its importance is sometimes based on computing a residual contribution to production after all other measured inputs are accounted for. Alternatively it is inferred by comparing asset values from security market data to values of physical measures of firm or market capital. Asset market data is often an important ingredient in the measurement of intangible capital. Asset returns are used Lars Peter Hansen is the Homer J. Livingston Distinguished Service Professor of Economics at the University of Chicago and a research associate of the National Bureau of Economic Research. John C. Heaton is the James H. Lorie Professor of Finance in the Graduate School of Business, University of Chicago, and a research associate of the National Bureau of Economic Research. Nan Li is an assistant professor in the Department of Finance and Accounting at the National University of Singapore. Conversations with Fernando Alvarez, Ravi Bansal, Susanto Basu, and Jim Heckman were valuable in completing this paper. Hansen and Heaton gratefully acknowledge support from the National Science Foundation and Li from the Olin Foundation.
to convey information about the marginal product of capital and asset values are used to infer magnitude of intangible capital. In the absence of uncertainty, appropriately constructed investment returns should be equated. With an omitted capital input, constructed investment returns across firms, sectors, or enterprises will be heterogeneous because of mis-measurement. As argued by Telser (1988) and many others, differences in measured physical returns may be “explained by the omission of certain components of their ‘true’ capital.” McGrattan and Prescott (2000) and Atkeson and Kehoe (2002) are recent macroeconomic examples of this approach. Similarly, as emphasized by Hall (remarks in this volume) and Lev and Radhakrishnam (chap. 3 in this volume), asset values should encode the values of both tangible and of intangible capital. Thus, provided that physical capital can be measured directly, security market data should convey information about the intangible component of the capital stock. This rationale provides the basis for the cross-sectional approaches to measurement discussed by Brynjolfsson and Hitt (2004) and references therein. Following Hall (2001), we find it fruitful to consider the impact of risk in the measurement of intangible capital. Although not emphasized by Hall, there is well documented heterogeneity in the returns to equity of different types. In the presence of uncertainty, it is well known that use of a benchmark asset return must be accompanied by a risk adjustment. Historical averages of equity returns suggest that the riskiness of tangible investment differs systematically from the riskiness of intangible investment. Asset return and price heterogeneity can be caused not only by differences in the marginal products but also by differences in the relative values of capital across types. Even under the presumption that markets value risk correctly, inferences about the intangible capital stock using security market prices must necessarily confront these secular movements in relative valuation. Unfortunately, a recurring problem in financial economics is the construction of a valuation model that is consistent with observed return heterogeneity. In section 4.2 we develop a model of investment and examine alternative assumptions used in the literature to make inferences about the intangible component of capital based on market valuations. We show that these assumptions typically impose a restriction on asset returns and/or relative prices. In section 4.3 we review and reproduce some of the findings in the asset pricing literature by Fama and French (1992) on return heterogeneity that may be linked to differences in intangible investment across firms. Risk premia can be characterized in terms of return risk or dividend or cash flow risk. We follow some recent literature in finance by exploring dividend risk. Since equity ownership of securities entitles an investor to future claims to dividends in all subsequent time periods, quantifying divi-
dend risk requires a time series process. We consider measurements of dividend risk using vector autoregressive (VAR) characterizations. Since asset valuation entails the study of a present-value relation, long-run growth components of dividends can play an important role in determining asset values. In section 4.4 we reproduce the present-value approximation used in the asset pricing literature and use it to define a long-run measure of risk as a discounted impulse response. In sections 4.5 and 4.6 we use VAR methods to estimate the dividend-risk measures that have been advocated in the asset-pricing literature. The literatures on intangible capital and asset return heterogeneity to date have been largely distinct. Our disparate discussion of these literatures will inherit some of this separation. In section 4.7 we conclude with discussion of how to understand better lessons from asset pricing for the measurement of intangible capital. 4.2 Adjustment Cost Model We begin with a discussion of adjustment costs and physical returns. Grunfeld (1960) shows how the market value of a firm is valuable in the explanation of corporate investment. Lucas and Prescott (1971) developed this point more fully by producing an equilibrium model of investment under uncertainty. Hayashi (1982) emphasized the simplicity that comes with assuming constant returns to scale. We exploit this simplicity in our development that follows. Consider the following setup. 4.2.1 Production Let nt denote a variable input into production such as labor, and suppose there are two types of capital, namely kt (k mt , k ut ) where k mt is the measured capital and kut is unmeasured or intangible capital stock. Firm production is given by f(kt , nt , zt ), where f displays constant returns to scale in the vector of capital stocks and the labor input nt . The random variable zt is a technology shock at date t. Following the adjustment cost literature, there is a nonlinear evolution for how investment is converted into capital. (1)
k_{t+1} = g(i_t, k_t, x_t),
where g is a two-dimensional function displaying constant returns to scale in investment and capital and xt is a specific shock to the investment technology. We assume that there are two components of investment corresponding to the two types of capital. This technology may be separable, in
which case the first coordinate of g depends only on i^m_t and k^m_t while the second coordinate depends only on i^u_t and k^u_t.

Example 1. A typical example of the first equation in system (1) is

k^m_{t+1} = (1 - \delta_m) k^m_t + i^m_t - g_m\!\left(\frac{i^m_t}{k^m_t}, x_t\right) k^m_t,

where \delta_m is the depreciation rate and g_m measures the investment lost in making new capital productive. In the absence of adjustment costs, the function g is linear and separable.

Example 2. A common specification that abstracts from adjustment costs is

g(k_t, i_t, x_t) = \begin{bmatrix} 1-\delta_m & 0 \\ 0 & 1-\delta_u \end{bmatrix} k_t + i_t.
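The accumulation equation in Example 1 is easy to experiment with numerically. The sketch below simulates it under an assumed quadratic form for g_m and a constant investment rate; the functional form and all parameter values are illustrative assumptions, not anything estimated in this chapter.

```python
import numpy as np

# Simulate the accumulation equation of Example 1 under an assumed quadratic
# adjustment-cost function g_m(i/k, x) = (phi/2)(i/k - delta_m)^2, so that
# adjustment costs vanish when investment just covers depreciation.
delta_m = 0.10        # depreciation rate (assumed)
phi = 2.0             # adjustment-cost curvature (assumed)
invest_rate = 0.15    # constant investment-capital ratio i_t / k_t (assumed)
T = 20

def g_m(rate):
    return 0.5 * phi * (rate - delta_m) ** 2

k = np.empty(T + 1)
k[0] = 1.0
for t in range(T):
    i_t = invest_rate * k[t]
    # Example 1: k_{t+1} = (1 - delta_m) k_t + i_t - g_m(i_t / k_t) k_t
    k[t + 1] = (1.0 - delta_m) * k[t] + i_t - g_m(invest_rate) * k[t]

print(np.round(k, 3))
# Without adjustment costs (Example 2) the same investment path would yield
# k_{t+1} = (1 - delta_m) k_t + i_t, i.e., slightly faster accumulation.
```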
4.2.2 Firm Value

Each time period the firm purchases investment goods and produces. Let p_t denote the vector of investment good prices and w_t the wage rate. Output is the numeraire in each date. The date-zero firm value is

E\left\{ \sum_{t=0}^{\infty} S_{t,0} \left[ f(k_t, n_t, z_t) - p_t \cdot i_t - w_t n_t \right] \Big| F_0 \right\}.
The firm uses market-determined stochastic discount factors to value cash flows. Thus, St,0 discounts the date-t cash flow back to date zero. This discount factor is stochastic and varies depending on the realized state of the world at date t. As a consequence St,0 not only discounts known cash flows but adjusts for risk; see Harrison and Kreps (1979) and Hansen and Richard (1987).1 The notation F0 denotes the information available to the firm at date zero. Form the Lagrangian
E\left\{ \sum_{t=0}^{\infty} S_{t,0} \left( f(k_t, n_t, z_t) - p_t \cdot i_t - w_t n_t - \lambda_t \cdot [k_{t+1} - g(i_t, k_t, x_t)] \right) \Big| F_0 \right\},
where k0 is a given initial condition for the capital stock. First-order conditions give rise to empirical relations and valuation relations that have been used previously. Consider the first-order conditions for investment: (2)
p_t = \left[\frac{\partial g}{\partial i}(i_t, k_t, x_t)\right]' \lambda_t.
1. This depiction of valuation can be thought of assigning state prices, but it also permits certain forms of market incompleteness.
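For intuition about the investment first-order condition (2), the following sketch specializes it to the separable case of Example 1 with a quadratic adjustment-cost function and inverts it to obtain the investment rate implied by a given ratio of shadow value to investment-good price. The functional form and parameter values are assumptions chosen only for illustration.

```python
# Invert the separable version of first-order condition (2) from Example 1
# with a quadratic adjustment cost g_m(i/k, x) = (phi/2)(i/k - delta_m)^2,
# which gives p^m/lambda^m = 1 - phi*(i/k - delta_m): a higher shadow value
# relative to the investment-good price maps into a higher investment rate.
# Parameters are assumed, purely for illustration.

delta_m = 0.10
phi = 2.0

def investment_rate(q):
    """q = lambda^m / p^m; invert p^m/lambda^m = 1 - phi*(i/k - delta_m)."""
    return delta_m + (1.0 - 1.0 / q) / phi

for q in (0.8, 1.0, 1.2, 1.5):
    print(f"lambda/p = {q:4.2f}  ->  i/k = {investment_rate(q):6.3f}")
# At q = 1 the firm just replaces depreciation; q above one raises the
# investment rate and q below one lowers it, the q-theory intuition.
```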
Special cases of this relation give rise to the so-called q theory of investment. Consider, for instance, the separable specification in Example 1. Then
(3)   \frac{p^m_t}{\lambda^m_t} = 1 - \frac{\partial g_m}{\partial i}\!\left(\frac{i^m_t}{k^m_t}, x_t\right).
This relates the investment-capital ratio to what is called Tobin's q (q_t \equiv \lambda^m_t / p^m_t). The Lagrange multiplier \lambda^m_t is the date-t shadow value of the measured capital stock that is productive at date t + 1. There is an extensive empirical literature that has used equation (3) to study the determinants of investment. As is well known, \lambda^m_t = p^m_t and Tobin's q is equal to one in the absence of adjustment costs as in Example 2. Consider next the first-order condition for capital at date t + 1:
\lambda_t = E\left\{ S_{t+1,t} \left[ \frac{\partial f}{\partial k}(k_{t+1}, n_{t+1}, z_{t+1}) + \left[\frac{\partial g}{\partial k}(i_{t+1}, k_{t+1}, x_{t+1})\right]' \lambda_{t+1} \right] \Big| F_t \right\},

where S_{t+1,t} = S_{t+1,0}/S_{t,0} is the implied one-period stochastic discount factor between dates t and t + 1. This depiction of the first-order conditions is in the form of a one-period pricing relation. As a consequence, the implied returns to investments in the capital goods are

r^m_{t+1} = \frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1}) + \dfrac{\partial g}{\partial k^m}(i_{t+1}, k_{t+1}, x_{t+1}) \cdot \lambda_{t+1} }{ \lambda^m_t }

r^u_{t+1} = \frac{ \dfrac{\partial f}{\partial k^u}(k_{t+1}, n_{t+1}, z_{t+1}) + \dfrac{\partial g}{\partial k^u}(i_{t+1}, k_{t+1}, x_{t+1}) \cdot \lambda_{t+1} }{ \lambda^u_t }.

The denominators of these shadow returns are the marginal costs of investing an additional unit of capital at date t. The numerators are the corresponding marginal benefits reflected in the marginal product of capital and the marginal contribution to productive capital in future time periods. The shadow returns are model-based constructs and are not necessarily the same as the market returns to stock or bond holders. In the separable case (Example 1), the return to the measurable component of capital is

r^m_{t+1} = \frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1}) k^m_{t+1} + \lambda^m_{t+1} k^m_{t+2} - p^m_{t+1} i^m_{t+1} }{ \lambda^m_t k^m_{t+1} }.

An alternative depiction can be obtained by using the investment first-order conditions to substitute for \lambda^m_t and \lambda^m_{t+1}, as in Cochrane (1991b). In the absence of adjustment costs (Example 2), the return to tangible capital is
(4)   r^m_{t+1} = \frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1}) + (1 - \delta_m)\, p^m_{t+1} }{ p^m_t }.
The standard stochastic growth model is known to produce too little variability in physical returns relative to security market counterparts. In the one-sector version, the relative price p mt becomes unity. As can be seen in equation (4), the only source of variability is the marginal product of capital. Inducing variability in this term by variability in the technology shock process zt1 generates aggregate quantities such as output and consumption that are too variable. The supply of capital is less elastic when adjustment costs exist; hence, models with adjustment costs can deliver larger return variability than the standard stochastic growth model. This motivated Cochrane (1991b) and Jermann (1998) to include adjustment costs to physical capital in their attempts to generate interesting asset market implications in models of aggregate fluctuations. As an alternative, Boldrin, Christiano, and Fisher (2001) study a two-sector model with limited mobility of capital across technologies. In our environment, limited mobility between physical and intangible capital could be an alternative source of aggregate return variability. When we restrict the technology to be constant returns to scale, the time zero firm value is (5)
f(k_0, n_0, z_0) - i_0 \cdot p_0 - w_0 n_0 + k_1 \cdot \lambda_0 = k_0 \cdot \left[ \frac{\partial f}{\partial k}(k_0, n_0, z_0) + \left[\frac{\partial g}{\partial k}(i_0, k_0, x_0)\right]' \lambda_0 \right].

This relation is replicated over time. Thus the date-t firm value is given by the cash flow (profit) plus the ex-dividend price of the firm. Equivalently, it is the value of the date-zero vector of capital stocks taking account of the marginal contribution of this capital to the production of output and to capital in subsequent time periods. Thus, asset market values can be used to impute \lambda_t \cdot k_{t+1} after adjusting for firm cash flow. When the firm has unmeasured intangible capital, this additional capital is reflected in the asset valuation of the firm. Even if the tangible component of the capital (k^m_{t+1}) can be measured, variation in the relative price of intangible capital makes it difficult to infer a physical measure of k^u_{t+1}. The presence of intangible capital alters how we interpret Tobin's q. In effect, there are now multiple components to the capital stock. Tobin's q is typically measured as a ratio of values and not as a simple ratio of prices. While the market value of a firm has both contributions, a replacement value constructed by multiplying the price of new investment goods by the
measured capital stock will no longer be a simple price ratio. Instead, we would construct

\frac{\lambda_t \cdot k_{t+1}}{p^m_t k^m_{t+1}}.

Heterogeneity in q across firms or groups of firms reflects in part different amounts of intangible capital, not simply a price signal to convey the profitability of investment. The dynamics of the ex-dividend price of the firm are given by

\lambda_t \cdot k_{t+1} = E\left\{ S_{t+1,t} \left[ f(k_{t+1}, n_{t+1}, z_{t+1}) - p_{t+1} \cdot i_{t+1} - w_{t+1} n_{t+1} + \lambda_{t+1} \cdot k_{t+2} \right] \Big| F_t \right\}.

The composite return to the firm is thus

(6)   r^c_{t+1} = \frac{ f(k_{t+1}, n_{t+1}, z_{t+1}) - p_{t+1} \cdot i_{t+1} - w_{t+1} n_{t+1} + \lambda_{t+1} \cdot k_{t+2} }{ \lambda_t \cdot k_{t+1} } = \frac{ \lambda^m_t k^m_{t+1} r^m_{t+1} + \lambda^u_t k^u_{t+1} r^u_{t+1} }{ \lambda_t \cdot k_{t+1} }.
Recall that k_{t+1} is determined at date t (but not its productivity) under our timing convention. The composite return is a weighted average of the returns to the two types of capital with weights given by the relative values of the two capital stocks. Firm ownership includes both bond and stock holders. The market counterpart to the composite return is a weighted average of the returns to the bond holders and equity holders with portfolio weights dictated by the amount of debt and equity of the firm.

4.2.3 Imputing the Intangible Capital Stock

These valuation formulas have been used by others to make inferences about the intangible capital stock. First we consider a return-based approach. We then consider a second approach based on asset values. Following Atkeson and Kehoe (2002) and others, we exploit the homogeneity of the production function and Euler's Theorem to write

y_{t+1} = \frac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1})\, k^m_{t+1} + \frac{\partial f}{\partial n}(k_{t+1}, n_{t+1}, z_{t+1})\, n_{t+1} + \frac{\partial f}{\partial k^u}(k_{t+1}, n_{t+1}, z_{t+1})\, k^u_{t+1},

where y_{t+1} = f(k_{t+1}, n_{t+1}, z_{t+1}) is output. Thus, the contribution of intangible capital to output is measured by
\frac{ \dfrac{\partial f}{\partial k^u}(k_{t+1}, n_{t+1}, z_{t+1})\, k^u_{t+1} }{ y_{t+1} } = 1 - \frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1})\, k^m_{t+1} }{ y_{t+1} } - \frac{ \dfrac{\partial f}{\partial n}(k_{t+1}, n_{t+1}, z_{t+1})\, n_{t+1} }{ y_{t+1} }.

To make this operational we require a measure of the labor share of output given by compensation data and a measure of the share of output attributed to the measured component of capital. Using formula (4) from Example 2 and knowledge of the return and the depreciation rate, we can construct

\frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1}) }{ p^m_t } = r^m_{t+1} - (1 - \delta_m) \frac{p^m_{t+1}}{p^m_t}.

Thus,

(7)   \frac{ \dfrac{\partial f}{\partial k^m}(k_{t+1}, n_{t+1}, z_{t+1})\, k^m_{t+1} }{ y_{t+1} } = \left[ r^m_{t+1} - (1 - \delta_m) \frac{p^m_{t+1}}{p^m_t} \right] \frac{ p^m_t k^m_{t+1} }{ y_{t+1} }.
This formula avoids the need to directly measure rental income to measured capital, but it instead requires measures of the physical return, physical depreciation scaled by value appreciation, and the relative value of tangible capital to income. The physical return to measured capital is not directly observed. Even if we observed the firm’s (or industry’s or aggregate) return from security markets, this would be the composite return in equation (6) and would include the contribution to intangible capital. As a result, a time series of return data from security markets is not directly usable. Instead Atkeson and Kehoe (2002) take a steady-state approximation, implying that returns should be equated to measure the importance of intangible capital in manufacturing. Income shares and price appreciation are measured using time series averages. Given the observed heterogeneity in average returns, as elsewhere in empirical studies based on the deterministic growth model, there is considerable ambiguity as to which average return to use. To their credit, Atkeson and Kehoe document the sensitivity of their intangible capital measure to the assumed magnitude of the return.2 We will have more to say about return heterogeneity subsequently. 2. Atkeson and Kehoe (2002) are more ambitious than what we describe. They consider some tax implications and two forms of measured capital: equipment and structures. Primarily they develop and apply an interesting and tractable model of organizational capital.
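A small worked computation may help fix ideas about how formula (7) is used. Every input below (the gross physical return, depreciation rate, relative-price growth, value-to-income ratio, and labor share) is an assumed number chosen only to show the mechanics, not an estimate from the chapter.

```python
# Illustrative computation of the measured-capital share of output in
# equation (7); all numbers are assumptions chosen only to show the mechanics.

r_m = 1.08             # gross physical return to measured capital (assumed)
delta_m = 0.06         # depreciation rate (assumed)
price_growth = 1.02    # p^m_{t+1} / p^m_t, relative price appreciation (assumed)
value_to_income = 1.8  # p^m_t k^m_{t+1} / y_{t+1} (assumed)

# Equation (7): [r^m - (1 - delta_m) p^m_{t+1}/p^m_t] * (p^m_t k^m_{t+1} / y_{t+1})
measured_capital_share = (r_m - (1.0 - delta_m) * price_growth) * value_to_income
print(f"measured-capital share of output: {measured_capital_share:.3f}")

# With a labor share taken from compensation data (0.65 here, assumed), the
# residual share is attributed to intangible capital as in the text.
labor_share = 0.65
intangible_share = 1.0 - labor_share - measured_capital_share
print(f"implied intangible-capital share:  {intangible_share:.3f}")
```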
To infer the value of the intangible capital relative to output using return data, we combine equation (7) with its counterpart for intangible capital to deduce that
(8)   1 - \frac{ \dfrac{\partial f}{\partial n}(k_{t+1}, n_{t+1}, z_{t+1})\, n_{t+1} }{ y_{t+1} } = \left[ r^c_{t+1} - (1 - \delta_m) \frac{p^m_{t+1}}{p^m_t} \right] \frac{ p^m_t k^m_{t+1} }{ y_{t+1} } + \left[ r^c_{t+1} - (1 - \delta_u) \frac{p^u_{t+1}}{p^u_t} \right] \frac{ p^u_t k^u_{t+1} }{ y_{t+1} }.
To use this relation we must use not only the return r^c_{t+1} but also the growth rate in the investment prices for the two forms of capital and the depreciation rates. From this we may produce a measure of p^u_t k^u_{t+1}/y_{t+1} using equation (8). McGrattan and Prescott (2000) use a similar method along with steady-state calculations and a model in which p^u_t = p^m_t = 1 to infer the intangible capital stock.3 Instead of using security market returns or historical averages of these returns, they construct physical returns, presuming that the noncorporate sector does not use intangible capital in production.4 Rather than making this seemingly hard-to-defend restriction, one could link the return r^c_{t+1} directly to asset returns, as in Atkeson and Kehoe (2002). However, the practical question of which security market return to use would still be present.5 In contrast to Atkeson and Kehoe (2002) and McGrattan and Prescott (2000, 2003), uncertainty is central in the analysis of Hall (2001). For simplicity, Hall considers the case in which there is in effect a single capital stock and a single investment good, but only part of capital is measured. Equivalently, the capital stocks k^m_t and k^u_t are perfect substitutes. Thus, the production function is given by y_t = f^a(k^a_t, n_t, z_t), where k^a_t = k^m_t + k^u_t. Capital evolves according to

(9)   k^a_{t+1} = g^a(k^a_t, i^a_t),
3. McGrattan and Prescott (2000) also introduce tax distortions and a noncorporate sector. They also consider uncertainty, but with little gain. They use a minor variant of the standard stochastic growth model, and that model is known to produce physical returns with little variability. 4. McGrattan and Prescott (2003) use an a priori restriction on preferences instead of the explicit link to returns in the noncorporate sector, but this requires independent information on the preference parameters. 5. The measurement problem is made simpler by the fact that it is the composite return that needs to be computed and not the individual return on measured capital. The implied oneperiod returns to equity and bond holders can be combined as in Hall (2001), but computing the appropriate one-period returns for bond holders can be problematic.
with x_t excluded. The first-order conditions for investment are now given by

(10)   p^a_t = \frac{\partial g^a}{\partial i}(k^a_t, i^a_t)\, \lambda^a_t,

and

(11)   v_t = \frac{\lambda^a_t}{p^a_t}\, k^a_{t+1},

where the market value of the firm, v_t,
is measured from the security markets using the firm value relation (5) and taking investment to be numeraire. For a given k at , relations (9), (10), and (11) are three equations in the three unknowns at /p at , k at1, i at . In effect they provide a recursion that can be iterated over time with the input of firm market value vt . Instead of returns, Hall (2001) uses asset values to deduce a time series for the aggregate capital stock and the corresponding shadow valuation of that stock.6 While Hall (2001) applies this method to estimate a time series of aggregate capital stocks, we will consider some evidence from empirical finance on return heterogeneity that indicates important differences between returns to the tangible and intangible components of the capital stocks. This suggests the consideration of models in which intangible capital differs from tangible capital in ways that might have important consequences for measurement. This includes models that are outside the adjustment cost models described here. 4.3 Evidence for Return Heterogeneity We now revisit and reconstruct results from the asset-pricing literature. Since the work of Fama and French (1992) and others, average returns to portfolios formed on the basis of the ratio of book value to market value are constructed. While the book-to-market value is reminiscent of the q measure of the ratio of the market value of a firm vis-à-vis the replacement cost of its capital, here the book-to-market value is computed using only the equity holders’ stake in the firm. Capital held by bond holdings is omitted from the analysis. Recall from section 4.2 that intangible capital is reflected in only the market measure of assets but is omitted from the book measure. We are identifying firms with high intangible capital based on a high ratio of book equity to market equity (BE-ME). It is difficult to check this identification directly because the market value of debt at the firm level is not easily observed. As a check on our interpretation of the portfolios as reflecting 6. Hall (2001) establishes the stability of this mapping for some adjustment cost specifications, guaranteeing that the impact of initializing k a0 of the recursion at some arbitrary level decays over time.
different levels of intangible capital we examined whether our portfolio construction would be different if we included debt. We used the book value of debt as an approximation to the market value and considered rankings of firms based on book assets to market assets. This resulted in essentially the same rankings of firms. In fact, the rank correlation between book assets to market assets and BE-ME averaged 0.97 over the fifty-three years of our sample. This gives us confidence in identifying the high BEME portfolio as containing firms with low levels of intangible and the low BE-ME portfolio as containing firms with high levels of intangibles. Fama and French form portfolios based on the BE-ME ratio and estimate the mean return of these groups. They find that low-BE-ME groups have low average returns. Fama and French (1992) view a low BE-ME as signaling sustained high earnings and/or low risk. While we follow Fama and French (1992) in constructing portfolios ranked by BE-ME ratios, we use a coarser sort than they do. We focus on five portfolios instead of ten, but this does not change the overall nature of their findings. Each year listed firms are ranked by their BE-ME using information from COMPUSTAT. Firms are then allocated into five portfolios, and this allocation is held fixed over the following year. The weight placed on a firm in a portfolio is proportional to its market value each month.7 Firms may change groups over time, and the value weights are adjusted accordingly. In effect, the BE-ME categories are used to form five portfolio dividends, returns, and values each time period. This grouping is of course different in nature from the grouping of firms by industry standard industrial classification (SIC) codes, an approach commonly used in the industrial organization (IO) literature. For instance, firms in the low-BE-ME category may come from different industries, and the composition may change through time. On the other hand, this portfolio formation does successfully identify interesting payout heterogeneity at the firm level, as we demonstrate below. Figure 4.1 plots the market value relative to book value of five portfolios of U.S. stocks over the period 1947–2001. Notice that there is substantial heterogeneity in the market value relative to book value of these portfolios. This potentially reflects substantial differences in intangible capital held by the firms that make up the portfolios. Further, the value of market equity to book equity fluctuates dramatically over time. These fluctuations can reflect changes in the relative composition of the capital stock between tangible and intangible capital. They may also reflect changes in the relative valuation of the two types of capital. Changes in valuation reflect changes in conjectured productivity of the different types of capital but may also reflect changes in how the riskiness is perceived and valued by investors. 7. See Fama and French (1992) for a more complete description of portfolio construction.
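The portfolio formation just described can be summarized in a short sketch. The column names below are hypothetical placeholders, and the sketch uses simple quintiles of the full cross section rather than the NYSE breakpoints of Fama and French (1992), so it should be read as a schematic of the procedure rather than a replication.

```python
import pandas as pd

# Sketch of the annual BE-ME sort and value-weighted portfolio returns.
# Column names (firm, date, formation_year, book_equity, market_equity, ret,
# lagged_market_value) are hypothetical placeholders for COMPUSTAT/CRSP data.

def form_beme_portfolios(annual: pd.DataFrame, monthly: pd.DataFrame) -> pd.DataFrame:
    """annual: one row per firm and formation year with book and market equity;
    monthly: firm-month returns and lagged market values over the holding year."""
    annual = annual.copy()
    annual["beme"] = annual["book_equity"] / annual["market_equity"]
    # Allocate firms to five groups at each formation date; membership is
    # then held fixed for the following year.
    annual["group"] = annual.groupby("formation_year")["beme"].transform(
        lambda s: pd.qcut(s, 5, labels=[1, 2, 3, 4, 5])
    )
    merged = monthly.merge(
        annual[["firm", "formation_year", "group"]],
        on=["firm", "formation_year"],
        how="inner",
    )
    # Value-weighted return of each group in each month; weights are
    # proportional to (lagged) market value, as in the text.
    def value_weighted(g: pd.DataFrame) -> float:
        w = g["lagged_market_value"]
        return float((g["ret"] * w).sum() / w.sum())

    return merged.groupby(["date", "group"]).apply(value_weighted).unstack("group")
```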
Fig. 4.1   Market-to-book value of equity for five portfolios of stocks
Table 4.1 presents sample statistics for these portfolios of stocks. For comparison, the column labeled “Market” gives statistics for the Center for Research in Securities Prices (CRSP) value-weighted portfolio. Consistent with figure 4.1, there are substantial differences in the average value of BE-ME for these portfolios. Notice that the portfolios with lower BEME (high market value relative to book value of equity) are also the ones with the highest level of research and development (R&D) relative to sales. This is consistent with the idea that large R&D expenditures will ultimately generate high cash flows in the future, thus justifying high current market values. Also, the high level of R&D by firms with high market valuation relative to book value may reflect substantial investment in intangibles. While the five BE-ME portfolios are likely to have different compositions of capital, these portfolios also imply different risk-return trade-offs. As in Fama and French (1992), the low-BE-ME portfolios have lower mean returns but not substantially different volatility than high-BE-ME portfolios. The mean returns differ, and the means of implied excess returns scaled by volatility (Sharpe ratios) also differ. High-BE-ME portfolios have higher Sharpe ratios. In particular, the highest-BE-ME portfolio has a Sharpe ratio that is higher than that of the overall equity market. A
Table 4.1   Properties of portfolios sorted by book-to-market (B-M)

                                      Portfolio
                                   (1)      (2)      (3)      (4)      (5)    Market
Average return (%)                6.48     6.88     8.90     9.32    11.02     7.23
Standard deviation of return (%) 37.60    32.76    29.64    31.66    35.50    32.94
Average B-M                       0.32     0.62     0.84     1.12     2.00     0.79
Average R&D/Sales                 0.12     0.06     0.04     0.03     0.03     0.08
Sharpe ratio                      0.18     0.20     0.28     0.28     0.30     0.21
Correlation with consumption      0.20     0.18     0.20     0.20     0.21     0.20

Notes: Portfolios are formed by sorting stocks into five portfolios using NYSE breakpoints from Fama and French (1992). Portfolios are ordered from lowest to highest average book-to-market value. Data are from 1947Q1 to 2001Q4 for returns and B-M ratios. The R&D-sales ratio is from 1950 to 2001. Returns are converted to real units using the implicit price deflator for nondurable and services consumption. Average returns and standard deviations are calculated using the natural logarithm of quarterly gross returns multiplied by four to put the results in annual units. Average book-to-market is the average portfolio book-to-market over the period, computed from COMPUSTAT. Average R&D-sales is also computed from COMPUSTAT. The Sharpe ratio is based on quarterly observations. Correlation with consumption is measured as the contemporaneous correlation between log returns and log consumption growth.
portfolio with an even larger Sharpe ratio can be constructed by taking a long position in the high-BE-ME portfolio and offsetting this with a short position in the low-BE-ME portfolios. This occurs because there is substantial positive correlation across the portfolios. The spectacular Sharpe ratios that are possible have been noted by many authors; see MacKinlay (1995), for example. The consumption-based capital asset-pricing model predicts that differences in average returns across the five portfolios are due to differences in the covariances between returns and consumption. That is, portfolios may have low returns because they offer some form of consumption insurance. The last row of table 4.1 displays the correlation between each quarterly portfolio return and the quarterly growth rate of aggregate real expenditures on nondurables and services. Because there is little difference in the volatility across portfolios, there is little difference in the implied covariance between returns and consumption growth. This measure of risk therefore implies little difference in required returns across the portfolios. The high Sharpe ratios and small covariances with consumption are known to make the consumption insurance explanation problematic; see Hansen and Jagannathan (1991) and Cochrane and Hansen (1992), for example. We will revisit this explanation, but in the context of dividend risk instead of return risk. Differences in BE-ME are partially reflected in differences in future cash flows. Table 4.2 presents some basic properties of the dividend cash flows from the portfolios. These dividends are imputed from the CRSP return
Table 4.2   Cash flow properties of portfolios sorted by book-to-market value

                                              Portfolio
                                           (1)      (2)      (3)      (4)      (5)    Market
Average (log) dividend growth (%)         1.78     1.68     3.13     3.54     4.48     3.09
Standard deviation of (log) dividend
  growth (%)                             13.50    17.09    11.71    12.05    17.76    23.99
Average log (D/P)                        -3.78    -3.41    -3.23    -3.11    -3.15     n.a.
Average P/D                              49.12    33.01    27.00    23.96    24.82     n.a.

Notes: n.a. = not available. D/P = dividend-price ratio. P/D = price-dividend ratio.
files. Each month and for each stock, CRSP reports a return without dividends, denoted R^{wo}_{t+1} = P_{t+1}/P_t, and a total return that includes dividends, denoted R^w_{t+1} = (P_{t+1} + D_{t+1})/P_t. The dividend yield D_{t+1}/P_t is then imputed as

\frac{D_{t+1}}{P_t} = R^w_{t+1} - R^{wo}_{t+1}.

Changes in this yield along with the capital gain in the portfolio are used to impute the growth in portfolio dividends. This construction has the interpretation of following an initial investment of $1 in the portfolio and extracting the dividends while reinvesting the capital gains. From the monthly dividend series we compute quarterly averages. Real dividends are constructed by normalizing nominal dividends on a quarterly basis by the implicit price deflator for nondurable and service consumption taken from the national income and product accounts. Finally, some adjustment must be made to quarterly dividends because of the pronounced seasonal patterns in corporate dividend payout. Our measure of quarterly dividends is constructed by taking an average of the logarithm of dividends in a particular quarter and over the previous three quarters. We average the logarithm of dividends because our empirical modelling will be linear in logs. Table 4.2 reports statistics for this constructed proxy of log dividends. Notice from table 4.2 that the low-BE-ME portfolios also have low dividend growth. Just as there is considerable heterogeneity in the measures of average returns, there is also considerable heterogeneity in growth rates. An important measurement question that we will explore is whether these ex post sample differences in dividend growth are something that is fully perceived ex ante or whether some of this heterogeneity is the outcome of dividend processes with low-frequency components. We suspect that much of the observed heterogeneity in dividend growth was known a priori by investors and hence this heterogeneity will reflect potential differences in risk. Some of our calculations that follow will treat this heterogeneity as reflecting in part differences in long-run risk. In section 4.4, we turn to a discussion of risk measurement for these cash flows.
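The dividend imputation described above amounts to a simple recursion on the two return series. The sketch below illustrates it; the input series names are hypothetical placeholders for the CRSP and NIPA data, and the quarterly aggregation and seasonal smoothing follow the steps in the text.

```python
import numpy as np
import pandas as pd

# Sketch of the dividend imputation: monthly gross returns with and without
# dividends come in, smoothed quarterly log real dividends come out.
# 'deflator' is a monthly price-deflator series aligned to the return dates.

def implied_log_dividends(ret_with: pd.Series, ret_without: pd.Series,
                          deflator: pd.Series) -> pd.Series:
    value = 1.0                                   # $1 initial investment
    dividends = []
    for rw, rwo in zip(ret_with, ret_without):
        dividends.append((rw - rwo) * value)      # D_{t+1} = (R^w_{t+1} - R^wo_{t+1}) P_t
        value *= rwo                              # reinvest the capital gain P_{t+1}/P_t
    div = pd.Series(dividends, index=ret_with.index)
    real_div = div / deflator.reindex(div.index)  # convert to real units
    quarterly = real_div.groupby(real_div.index.to_period("Q")).mean()
    log_div = np.log(quarterly)
    # Smooth the pronounced payout seasonality: average the current and the
    # previous three quarters of log dividends, as in the text.
    return log_div.rolling(4).mean()
```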
A potential concern in evaluating dividends at the portfolio level is that portfolio formation could lead to artificial differences in the long-run risk properties of portfolio cash flows that are not easily interpretable. For example, it may appear that portfolios biased toward investing in stocks with low dividend growth will necessarily have low growth rates in cash flows and therefore little long-run exposure to economic growth. Notice, however, that the implied dividend growth rates in the constructed portfolios depend in part on the relative prices of stocks bought and sold as the composition of the portfolios changes over time. Stocks with temporarily low dividend growth rates will have relatively high price appreciation, which can offset the low growth rates. Thus, the portfolio formation might actually result in a more stable dividend or cash flow.

4.4 Dividend Risk

In asset pricing it is common to explore risk premia by characterizing how returns covary with a benchmark return, as in the capital asset pricing model (CAPM), or, more generally, how returns covary with a candidate stochastic discount factor. The focus of the resulting empirical investigations is on return risk, in contrast to dividend or cash-flow risk. Recently there has been an interest in understanding cash-flow risk using linear time series methods. Examples include the work of Bansal, Dittmar, and Lundblad (2002a,b) and Cohen, Polk, and Vuolteenaho (2002). We follow this literature by using linear time series methods to motivate and construct a measure of dividend risk. To use linear time series methods requires a log-approximation for present discounted value formulas as developed by Campbell and Shiller (1988a,b).8 Write the one-period return on an equity as

R_{t+1} = \frac{P_{t+1} + D_{t+1}}{P_t} = \frac{D_{t+1}}{D_t} \left(1 + \frac{P_{t+1}}{D_{t+1}}\right) \frac{D_t}{P_t},
where Pt is the price and Dt is the dividend. Take logarithms and write
r_{t+1} = \log\left(1 + \frac{P_{t+1}}{D_{t+1}}\right) + (d_{t+1} - d_t) - (p_t - d_t),

where lowercase letters denote the corresponding logarithms. Next, approximate

8. Santos and Veronesi (2001) suggest a model for studying cash flow risk that avoids linear approximation by instead adopting a nonlinear model of income shares.
\log\left(1 + \frac{P_{t+1}}{D_{t+1}}\right) \approx \log[1 + \exp(\bar{p} - \bar{d})] + \frac{1}{1 + \exp(\bar{d} - \bar{p})}\, (p_{t+1} - d_{t+1} - \bar{p} + \bar{d}),

where \bar{p} - \bar{d} is the average logarithm of the price-dividend ratio. Use this approximation to write

(12)   r_{t+1} \approx (d_{t+1} - d_t) + \rho\, (p_{t+1} - d_{t+1}) - (p_t - d_t),

where

\rho = \frac{1}{1 + \exp(\bar{d} - \bar{p})}.

As shown by Campbell and Shiller (1988a), this approximation is reasonably accurate in practice. Treat equation (12) as a difference equation in the log price-dividend ratio and solve this equation forward:

p_t - d_t \approx \sum_{j=0}^{\infty} \rho^j\, (d_{t+1+j} - d_{t+j} - r_{t+1+j}).
This relation says that a time t + 1 shock to current and future dividends must be offset by the same shock to returns in the sense of a present discounted value. The discount factor will differ depending on the average logarithm of the dividend-price ratio for the security or portfolio. The present discounted value restriction is mathematically the same as that developed by Hansen, Roberds, and Sargent (1991) in their examination of the implications of present-value budget balance. To understand this restriction, posit a moving-average representation for the dividend growth process and the return process:

d_t - d_{t-1} = \alpha(L) w_t + \mu_d
r_t = \beta(L) w_t + \mu_r.

Here \{w_t\} is a vector, independently and identically distributed (i.i.d.) standard normal process and

\alpha(z) = \sum_{j=0}^{\infty} \alpha_j z^j,   \beta(z) = \sum_{j=0}^{\infty} \beta_j z^j,

where \alpha_j and \beta_j are row vectors. Since p_t - d_t depends only on date-t information, future shocks must be present-value neutral:

(13)   \alpha(\rho) - \beta(\rho) = 0.

For instance, if returns are close to being i.i.d., but not dividends, then

(14)   \beta(0) = \alpha(\rho).
The discounted dividend response should equal the return response to a shock. Since in fact returns are predictable, we will present some evidence that bears on this approximation. To evaluate the riskiness of each portfolio's exposure to the shocks w_t, we also measure the impact of the shocks on consumption growth:

c_t - c_{t-1} = \eta(L) w_t + \mu_c,

where c_t is the logarithm of aggregate consumption. To measure the economic magnitude of return responses, Hansen and Singleton (1983) used the familiar representative agent model with constant relative risk aversion (CRRA) utility:

(15)   E\left[ \delta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R^j_{t+1} \,\Big|\, F_t \right] = 1,
where \gamma is the coefficient of relative risk aversion. Under a lognormal approximation, the return on portfolio j satisfies r^j_{t+1} = \beta_j(L) w_{t+1} + \mu_{jr}. Euler equation (15) then implies that \mu_{jr} satisfies

(16)   E[r^j_{t+1} \mid F_t] - r^f_t + \frac{\beta_j(0)\, \beta_j(0)'}{2} = \gamma\, \eta(0)\, \beta_j(0)',
where r^f_t is the logarithm of the risk-free return. Whereas Hansen and Singleton (1983) used equation (16) to study directly one-period return risk, Bansal, Dittmar, and Lundblad (2002a,b) instead looked at the discounted dividend risk. In most of this paper we follow Bansal, Dittmar, and Lundblad (2002a) and treat \alpha_j(\rho) as a measure of risk in dividend growth. We refer to this measure as discounted dividend risk. The present-value relation (abstracting from approximation error) implies that this combination of dividend responses to future shocks must be offset by the corresponding return responses. As \rho tends to 1, we refer to the limit \alpha_j(1) as long-run risk. We will not include returns in our vector autoregressions for the reasons explained by Hansen, Roberds, and Sargent (1991).9 We will sometimes include dividend-price ratios in the vector autoregressive systems, however. These ratios are known to be informative about future dividends. Write the implied moving-average representation as

p^j_t - d^j_t = \psi_j(L) w_t + \mu_{jp}.

We may then back out a return process (approximately) as

9. Hansen, Roberds, and Sargent (1991) show that when returns are included in a VAR, the restrictions in equation (13) cannot be satisfied for the shocks identified by the VAR unless the VAR system is stochastically singular.
r^j_t = \beta_j(L) w_t + \mu_{jr},

where

\beta_j(z) = (\rho - z)\, \psi_j(z) + \alpha_j(z).

It follows from this formula for \beta_j that the present-value budget balance restriction in equation (13) is satisfied by construction and is not testable. To summarize, we use \alpha_j(\rho) as our measure of discounted dividend risk. When dividend-price ratios are also included in the VAR system, the present-value budget balance restriction in equation (13) is automatically satisfied. By construction, discounted return risk and discounted dividend risk coincide.

4.5 Measuring Dividend Risk Empirically

In this section we evaluate the riskiness of the five BE-ME portfolios using the framework of section 4.4. Riskiness is measured by the sensitivity of portfolio cash flows and prices to aggregate shocks, identified under different assumptions. Since we are interested in the long-run impact of aggregate shocks, we consider several VAR specifications that make different assumptions about the long-run relationships between consumption, portfolio cash flows, and prices. In particular, we examine the effects of moving from the assumption of little long-run relationship between aggregates and portfolio cash flows to the assumption that there is a cointegrated relationship between aggregate consumption and cash flows.

4.5.1 Empirical Model of Consumption and Dividends

To measure dividend risk we require estimates of \alpha and \eta. We describe how to obtain these using VAR methods for consumption and dividends. The least restrictive specification we consider is

A_0 x_t + A_1 x_{t-1} + A_2 x_{t-2} + \cdots + A_\ell x_{t-\ell} = B_0 + B_1 t + w_t,

where consumption is the first entry of x_t and the dividend level is the second entry. The vectors B_0 and B_1 are two-dimensional, and similarly the square matrices A_j, j = 1, 2, \ldots, are two by two. The shock vector w_t has mean zero and covariance matrix I. We normalize A_0 to be lower triangular with positive entries on the diagonals. Form

A(z) = A_0 + A_1 z + A_2 z^2 + \cdots + A_\ell z^\ell.

We are interested in specifications in which A(z) is nonsingular for |z| < 1. We identify the first shock as the consumption innovation, and our aim is to measure the discounted average response:

\alpha(\rho) = (1 - \rho)\, [0 \;\; 1]\, A(\rho)^{-1}.
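The discounted-response calculation can be illustrated with a short sketch. The data below are simulated only to make the example self-contained, the lag length matches the five quarters used in the chapter, and the average log dividend-price ratio used to set \rho is an assumed value; none of the output should be read as reproducing the chapter's estimates.

```python
import numpy as np

# Sketch of alpha(rho) = (1 - rho)[0 1]A(rho)^{-1} for a bivariate system in
# (log consumption, log dividend). The data are simulated for self-containment.

rng = np.random.default_rng(0)
T, lags = 220, 5
x = np.zeros((T, 2))
for t in range(1, T):                       # two simulated random walks with drift
    x[t] = x[t - 1] + 0.005 + 0.01 * rng.standard_normal(2)

# Least-squares regression of x_t on a constant and its first five lags.
Y = x[lags:]
X = np.hstack([x[lags - j:T - j] for j in range(1, lags + 1)] + [np.ones((T - lags, 1))])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
Phi = [coef[2 * (j - 1):2 * j].T for j in range(1, lags + 1)]  # reduced-form lag matrices

# Lower-triangular normalization of A_0 so the shocks have identity covariance
# and the first shock is the consumption innovation.
resid = Y - X @ coef
A0 = np.linalg.inv(np.linalg.cholesky(np.cov(resid.T)))

def A(z):
    # A(z) = A_0 + A_1 z + ... + A_l z^l with A_j = -A_0 Phi_j for j >= 1.
    out = A0.copy()
    for j, Phi_j in enumerate(Phi, start=1):
        out = out - (A0 @ Phi_j) * z ** j
    return out

avg_log_dp = -4.6                           # assumed average log dividend-price ratio
rho = 1.0 / (1.0 + np.exp(avg_log_dp))      # discount factor as in section 4.4
alpha_rho = (1.0 - rho) * (np.array([0.0, 1.0]) @ np.linalg.inv(A(rho)))
print("rho:", round(rho, 4))
print("discounted dividend responses to the two shocks:", np.round(alpha_rho, 3))
```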
We use these formulas to produce long-run risk measures for each book-to-market portfolio. We also compute the limiting responses as \rho tends to unity. While we want to allow for A(z) to be singular at unity, we presume that (1 - z)A(z)^{-1} has a convergent power series for a region containing z = 1. This is equivalent to assuming that both consumption and dividends are (at least asymptotically) stationary in differences. The limiting responses are thus contained in the matrix (1 - z)A(z)^{-1} evaluated at z = 1. When A(1) is nonsingular, the limiting response matrix is identically zero, but it will be nonzero when A(1) is singular. The matrix A(1) is nonsingular when the VAR does not have stochastic growth components. When it is singular, the vector time series will be cointegrated in the sense of Engle and Granger (1987). We will explore singular specifications of A(1) in which the difference between log consumption and log dividends is presumed to be stationary.

4.5.2 Data Construction

For our measure of aggregate consumption we use aggregate consumption of nondurables and services taken from the national income and product accounts (NIPA). This measure is quarterly from 1947Q1 to 2002Q4, is in real terms, and is seasonally adjusted. Portfolio dividends were constructed as discussed in section 4.3. For portfolio prices in each quarter we use end-of-quarter prices. Motivated by the work of Lettau and Ludvigson (2001) and Santos and Veronesi (2001), in several of our specifications we allow for a second source of aggregate risk that captures aggregate exposure to stock market cash flows. This is measured as the share of corporate cash flows in aggregate consumption, computed as the ratio of corporate earnings to aggregate consumption. Corporate earnings are taken from NIPA. In all of the specifications that follow, the VAR models were fit using five quarters of lags.10 See appendix A for more details of the data construction.

4.5.3 Bivariate Model of Consumption and Dividends

First we follow Bansal, Dittmar, and Lundblad (2002b) and consider bivariate regressions that include aggregate consumption and the dividends for each portfolio separately. Table 4.3 reports results for the case where the state variable x_t is given by11

10. We also conducted some runs with nine lags. With the exception of the results for portfolio 1 when using aggregate earnings, the results were not greatly affected. 11. Notice that we consider separate specifications of the state variable for each portfolio. Ideally, estimation with all of the portfolio cash flows would be interesting, but because of data limitations this is not possible.
Table 4.3   Discounted responses of portfolio dividends in a log-level VAR

                             Portfolio
                   (1)       (2)       (3)       (4)       (5)
Discount factor   0.9943    0.9918    0.9902    0.9889    0.9894

A: Consumption shock
OLS estimator      0.14      0.34      0.22      0.53      1.32
10 percentile     -0.05      0.06      0.01      0.38      0.98
30 percentile      0.07      0.20      0.15      0.47      1.16
Median             0.14      0.30      0.22      0.53      1.33
70 percentile      0.21      0.43      0.30      0.61      1.53
90 percentile      0.34      0.72      0.41      0.75      1.92

B: Dividend shock
OLS estimator      0.68      1.23      0.75      0.58      1.19
10 percentile      0.45      0.66      0.56      0.42      0.78
30 percentile      0.56      0.87      0.65      0.50      0.98
Median             0.67      1.09      0.74      0.58      1.18
70 percentile      0.85      1.45      0.86      0.69      1.48
90 percentile      1.32      2.23      1.12      0.96      2.29
(17)   x_t = [ c_t ]
             [ d_t ].
For notational convenience we do not display the dependence of x_t, and hence of A(z), on the choice of portfolio. To simplify the interpretation of the shock vector w_t, we initially restrict the matrix A(z) to be lower triangular. Under this restriction, consumption depends only on the first shock while dividends depend on both shocks. This recursive structure presumes that consumption is not "caused" by dividends in the sense of Granger (1969) and Sims (1972).12 The first row of panel A shows that according to the discounted measure of dividend risk, the high book-to-market returns have a larger measure of dividend risk vis-à-vis the low book-to-market returns. The differences are quite striking in that the response to a consumption shock increases almost tenfold in moving from portfolio 1 to portfolio 5. This ordering was noted by Bansal, Dittmar, and Lundblad (2002a), using a different set of restrictions on the VAR.13 To illustrate the portfolio differences more fully, consider figure 4.2. This figure displays the implied responses of log dividends to a consumption shock. The discounted measure of risk reported in table
12. When this restriction on A(z) is relaxed, the measured discounted responses to a consumption shock that we report below are essentially the same.
13. Bansal, Dittmar, and Lundblad (2002a) consider two types of regressions. In the first, dividend growth is regressed on an eight-quarter moving average of past consumption growth. In the second, detrended dividends are regressed on contemporaneous detrended consumption and four leads and lags of consumption growth.
Fig. 4.2   Impulse responses to a consumption shock
Notes: This figure reports the impulse response functions for each of the portfolios and for consumption obtained by estimating the log-level version of the VAR in which xt has entries ct and dt. The matrix A(z) is restricted to be lower triangular.
4.3 is a weighted average of the responses depicted in figure 4.2. Notice in particular that portfolio 5 has a substantially different response to a consumption shock, with a pronounced peak response at about the ten-quarter horizon. The half-lives of the discount factors range from sixteen years for portfolio 4 to thirty years for portfolio 1. As a result, the discounted average responses weight tail responses heavily. Table 4.3 also reports Bayesian posterior percentiles for the discounted consumption risk computed using the method described in appendix B. These percentiles provide a measure of accuracy. Figure 4.3 plots the 10th, 50th, and 90th percentiles for the individual impulse responses. Notice that these measures of accuracy imply substantial sampling error in the estimated discounted responses. For example, consider the results for portfolios 1 and 5 as displayed in figure 4.3. Although the estimated short-run response to a consumption shock is quite different across these two portfolios, the confidence intervals substantially narrow this difference. Next we explore singular specifications of A(1) in which the difference between log consumption and log dividends is presumed to be stationary.
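The half-life claim can be checked directly from the discount factors reported in table 4.3: the half-life of λ^j is ln(0.5)/ln(λ) quarters. The short script below is only an illustration of that arithmetic.

```python
import numpy as np

# Half-lives, in years, implied by the quarterly discount factors in table 4.3
lambdas = {1: 0.9943, 2: 0.9918, 3: 0.9902, 4: 0.9889, 5: 0.9894}
for portfolio, lam in lambdas.items():
    print(portfolio, round(np.log(0.5) / np.log(lam) / 4, 1))
# Roughly 30 years for portfolio 1 and 16 years for portfolio 4, as stated in the text.
```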
Fig. 4.3   Bayesian percentiles for impulse responses to a shock to consumption
Notes: This figure gives the 10th, 50th, and 90th percentiles for the impulse response functions depicted in figure 4.2. The upper left panel depicts the consumption response, and the other five panels depict the responses for each of the five portfolio cash flows.
Again we use VAR methods, but now the first variable is the first-difference of log consumption and the second is the difference between log consumption and log dividends. This specification is in effect a restriction on A(z) and in particular restricts A(1) to be rank one. We continue to assume that A(z) is lower triangular. Since the ratio of dividends to consumption is presumed to be stationary for all portfolios, the long-run response of dividends to a consumption shock is the same. The discounted responses can still differ, however. As table 4.4 demonstrates, when dividends and consumption are restricted to respond the same way to permanent shocks, the discounted risk measures increase relative to those computed without restricting the rank of A(z). The limiting response is about .82 for all portfolios. The discounted responses of portfolios 1, 2, and 3 to a consumption shock are all pulled toward this value. The discounted risk measures for portfolios 4 and 5 are also increased by imposing this limiting value on the impulse response. In figure 4.4 we depict the impulse responses when cointegration is imposed and consumption is restricted. Comparing the impulse responses to a consumption shock in this figure to those in figure 4.2, we see that while tail
Table 4.4    Discounted responses of portfolio dividends in a cointegrated VAR

                          Portfolio
                   (1)      (2)      (3)      (4)      (5)
Discount factor   0.9943   0.9918   0.9902   0.9889   0.9894

A: Consumption and permanent shock
OLS estimator      0.75     0.89     0.75     1.11     2.03
10th percentile    0.13     0.40     0.27     0.80     1.24
30th percentile    0.59     0.71     0.60     0.97     1.62
Median             0.82     0.91     0.80     1.12     1.97
70th percentile    1.04     1.11     0.99     1.28     2.43
90th percentile    1.39     1.44     1.31     1.61     3.41

B: Dividend and transitory shock
OLS estimator      2.60     2.06     2.18     1.14     2.43
10th percentile    1.31     1.40     1.24     0.72     1.49
30th percentile    1.72     1.68     1.59     0.91     1.90
Median             2.19     1.96     1.92     1.10     2.30
70th percentile    2.86     2.30     2.36     1.40     2.89
90th percentile    4.15     2.91     3.13     2.09     4.21
Fig. 4.4 Impulse responses to a consumption shock for the cointegrated specification Notes: The impulse responses are identified by a VAR estimated with ct – ct–1 and ct – dt as the components of xt. The matrix A(z) is restricted to be lower triangular.
Table 4.5    Discounted responses of portfolio dividends to permanent and transitory shocks

                          Portfolio
                   (1)      (2)      (3)      (4)      (5)
Discount factor   0.9943   0.9918   0.9902   0.9889   0.9894

A: Discounted responses, permanent shock
OLS estimator      3.62     2.86     2.76     1.50    –0.14
10th percentile    1.61     1.62     1.16     0.45    –1.41
30th percentile    2.36     2.20     0.60     0.98    –0.47
Median             3.08     2.67     0.80     1.35    –0.05
70th percentile    4.07     3.22     0.99     1.89     0.30
90th percentile    5.88     4.07     1.31     3.09     0.86

B: Discounted responses, transitory shock
OLS estimator     –0.63    –0.56    –0.66    –1.01     1.68
10th percentile   –1.36    –1.26    –1.49    –1.45     1.13
30th percentile   –0.92    –0.87    –0.97    –1.03     1.34
Median            –0.68    –0.62    –0.70    –0.83     1.52
70th percentile   –0.45    –0.33    –0.42    –0.64     1.76
90th percentile    0.02     0.15     0.08    –0.38     2.25

C: Covariance between consumption and discounted response
OLS estimator      0.29     0.36     0.32     0.58     0.63
properties of the impulse responses have been altered, portfolio 5 continues to have a peak response at about ten quarters.14 For the cointegrated systems, we consider an alternative identification scheme. We do not restrict A(z) to be lower triangular, but we instead identify a permanent and a transitory shock following an approach suggested by Blanchard and Quah (1989). In this case the permanent shock is the shock that has the same nonzero long-run effect on consumption and dividends. The second shock is chosen to be orthogonal to the permanent shock and has zero long-run impact on consumption and dividends by construction. We also normalize the shocks so that positive movements in both shocks induce positive movements in consumption. Since both shocks influence consumption, both shocks are pertinent in assessing the riskiness of the implied cash flows. The results are reported in table 4.5, and the impulse responses are depicted in figure 4.5. The discounted responses to the permanent consumption shock differ from the responses to the previously identified consumption shocks. For instance, portfolio 5 now has an initial negative response to the permanent consumption shock, and this persists for many periods. The discounted response remains negative for this portfolio even though the
14. Given the data transformation, the Bayesian posterior percentiles for this VAR are based on a different specification of the prior distribution over comparable coefficients.
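The permanent-transitory identification can be summarized compactly. The sketch below is my own illustration (hypothetical inputs and helper names) of the Blanchard-Quah logic applied to a bivariate system in (Δc_t, c_t − d_t): the reduced-form innovations are rotated so that the second shock has no long-run effect on the level of consumption, and signs are normalized so that both shocks raise consumption on impact.

```python
import numpy as np

def blanchard_quah(Phi_list, Sigma):
    """Permanent/transitory rotation for a bivariate reduced-form VAR
    x_t = Phi_1 x_{t-1} + ... + Phi_p x_{t-p} + u_t with innovation covariance Sigma.
    Returns S such that u_t = S w_t with orthonormal w_t and the second shock
    having zero long-run impact on the first (consumption-growth) variable."""
    C1 = np.linalg.inv(np.eye(2) - sum(Phi_list))    # reduced-form long-run multiplier
    LR = C1 @ Sigma @ C1.T                           # long-run covariance
    S = np.linalg.solve(C1, np.linalg.cholesky(LR))  # makes C1 @ S lower triangular
    signs = np.sign(S[0, :]); signs[signs == 0] = 1.0
    return S * signs                                 # both shocks raise consumption on impact

# Hypothetical one-lag example
Phi1 = np.array([[0.3, 0.1], [0.0, 0.8]])
Sigma = np.array([[1.0, 0.2], [0.2, 0.5]])
print(np.round(blanchard_quah([Phi1], Sigma), 3))
```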
Fig. 4.5   Impulse responses to a permanent and transitory shock
Notes: The impulse responses are identified by a VAR estimated with ct – ct–1 and ct – dt as the two components of xt. The permanent shock is identified as the shock that alters consumption permanently and is matched in the long run by an equal movement in dividends. The transitory shock is uncorrelated with the permanent shock and has a transitory impact on both consumption and dividends. A line with circles is used to depict the portfolio 1 response, a line with squares for portfolio 2, a line with diamonds for portfolio 3, a line with triangles for portfolio 4, and a solid line for portfolio 5.
limiting response is by construction positive. Thus, holding portfolio 5 appears to provide some insurance against consumption risk, which makes the large mean return appear puzzling. The other portfolios' dividends respond positively to this consumption shock. The portfolio 5 response to the transitory shock is always positive, however. This is in contrast to the other four portfolios, which have negative responses to this shock. The transitory shock contributes to the discounted riskiness of the dividends. A defect of this identification scheme is that the identified shocks differ depending upon which portfolio we use in the empirical investigation. To address this concern, the final panel of table 4.5 reports, for each portfolio, the sum across the two shocks of the products of the impact consumption response and the discounted dividend response. This gives the conditional covariance between consumption growth and the discounted dividends. This accumulation of the effects of the two shocks mirrors our previous results. Risk increases
from portfolio 1 to portfolio 5, although the largest increases are from portfolio 3 to 4 and from portfolio 4 to 5. The discounted dividend risk measures suggest that the high book-to-market portfolios have more long-run covariation with consumption as measured by discounted responses. As emphasized by Bansal, Dittmar, and Lundblad (2002a), this provides a qualitative explanation for the heterogeneity in mean returns. The discounted dividend riskiness of the high book-to-market returns must be compensated for by a higher mean return. This claim is qualitative for at least two reasons. First, if returns are predictable, then the conditional means of returns will not equal the unconditional means reported in table 4.1. Second, while the discounted dividend response is approximately equal to the discounted response of cumulative returns, if returns are predictable then the discounted return response will differ from the one-period return response that is pertinent for asset pricing. In the next section we report some evidence on return predictability.

4.6 Results with an Additional Aggregate Shock

In our final VAR specification we consider what happens when an additional aggregate shock is added. In this specification x_t is given by
x_t = [ c_t ]
      [ e_t ]
      [ p_t ]
      [ d_t ],
where e_t is corporate profits at time t. We are led to consider this latter variable by the empirical investigations of Lettau and Ludvigson (2001) and Santos and Veronesi (2001). These authors argue for the addition of an aggregate share variable to help account for asset values. For example, Santos and Veronesi argue that exposure to stock market risk is affected by the contribution of corporate payouts to aggregate consumption. We restrict A(0) to be a lower-triangular matrix with positive entries on the diagonal. We refer to the first shock as the consumption shock and the second one as the earnings shock. Following Santos and Veronesi, we also consider a model in which the earnings-consumption ratio is stationary.15 In this specification we restrict the upper left two-by-two block of A(1) to be singular by running a VAR using the first difference of log consumption and the contemporaneous difference between earnings and consumption as data.
15. Santos and Veronesi (2001) do not use a linear VAR model but rather pose a share model in which the counterpart to the earnings-consumption ratio is restricted to be between zero and one.
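One simple way to implement the block restriction described below is to estimate the aggregate equations without the portfolio lags. The following sketch (hypothetical helper names and made-up data, not the authors' code) fits the aggregate block (c, e) on its own lags only and the portfolio block (p, d) on lags of all four variables, equation by equation with ordinary least squares.

```python
import numpy as np

def fit_block_triangular_var(X_agg, X_port, lags=5):
    """X_agg: T x 2 array of the aggregate variables (c, e);
    X_port: T x 2 array of the portfolio variables (p, d).
    The aggregate block excludes portfolio lags, imposing that the aggregates
    are not Granger-caused by the portfolio dividends and prices."""
    T = X_agg.shape[0]
    lag_block = lambda X, k: X[lags - k - 1:T - k - 1]   # lag k+1 regressors
    Z_agg = np.hstack([lag_block(X_agg, k) for k in range(lags)])
    Z_all = np.hstack([np.hstack([lag_block(X_agg, k), lag_block(X_port, k)])
                       for k in range(lags)])
    const = lambda Z: np.hstack([np.ones((Z.shape[0], 1)), Z])
    B_agg, *_ = np.linalg.lstsq(const(Z_agg), X_agg[lags:], rcond=None)
    B_port, *_ = np.linalg.lstsq(const(Z_all), X_port[lags:], rcond=None)
    return B_agg, B_port

rng = np.random.default_rng(0)
B_agg, B_port = fit_block_triangular_var(rng.standard_normal((80, 2)),
                                         rng.standard_normal((80, 2)))
print(B_agg.shape, B_port.shape)   # (11, 2) and (21, 2): constant plus lagged regressors
```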
Fig. 4.6 Impulse responses of consumption and earnings to a consumption shock and an earnings shock
Notes: The impulse responses without imposing cointegration were constructed from a bivariate VAR with entries ct, et; these responses are given by the dashed lines. Solid lines are used to depict the impulse responses estimated from the cointegrated system, which is computed from a VAR with ct – ct–1 and et – ct as time series components.
To avoid parameter proliferation, we restrict the dynamics of the aggregate variables c_t and e_t to not be Granger-caused by the individual portfolio dividends and prices. That is, we restrict A(z) to be block lower triangular. We consider the discounted responses to a shock to consumption and a shock to earnings. Figure 4.6 displays how consumption and earnings respond to the respective shocks in the two models. The dashed lines are for the cases where there is no restriction on the relative growth of earnings and consumption ("without cointegration"). The solid lines are for the cases where any permanent shock has the same long-run impact on consumption and earnings ("with cointegration"). Under our identification of shocks, the shock to earnings has no immediate effect on consumption. In section 4.4 we assumed that preferences over consumption are separable over time. The resulting pricing relationship, equation (16), predicts that any exposure of
cash flows to the earnings shock will have no impact on average security returns. Figure 4.6 shows that there is a substantial change in the long-run variation of consumption when we move to the model with cointegration. Both shocks have persistent effects on consumption and hence on earnings (through the cointegration restriction). Moreover, the earnings shock now has a quantitatively important impact on consumption. More general models of preferences predict that the effect of a shock on future consumption will have a significant effect on risk exposure. One such example is the recursive utility specification considered, for example, by Epstein and Zin (1989). In this model the effect of a shock on future consumption affects attitudes toward risk through the influence of the shock on the continuation value of utility. For this reason the shock to aggregate earnings could be a very important shock in accounting for the riskiness of cash flows. We now explore the implied responses of the financial variables. The discounted responses of portfolio dividends and prices are reported in table 4.6. The discounted dividend responses to a consumption shock in the model without cointegration are very similar to those we reported in table 4.3. This is to be expected because the earnings shock has little impact on consumption for this system. In the model with cointegration, the response to a consumption shock increases for each portfolio, but the ordering across portfolios is not clear. For example, portfolio 3 now has the largest response. Further, notice that with cointegration the discounted responses to an earnings shock are much larger than without cointegration. In general, the mixed results of table 4.6 indicate substantial sensitivity in measures of dividend risk both to the specification of the long-run dynamics of the series and to the definition of shocks. Panel A of table 4.6 does provide some evidence that differences in the discounted response to a consumption shock could provide an explanation for differences in the observed average returns of the portfolios. We now examine the more ambitious quantitative question of how much risk aversion is required for this explanation to work. Using equation (16) we can compute an implied value of the risk-aversion coefficient from the difference in the risk premia between any two portfolios. We use the discounted dividend responses as estimates of κ^j(0), based on approximation (14). Recall that this presumes that there is little predictability in returns, about which we will have more to say subsequently. In what follows we use the risk premium of each portfolio relative to that of portfolio 5 to calculate the implied coefficient, ignoring the restriction that the same value should be used in explaining the entire cross-section of average returns. Our goal is merely to provide a convenient metric to evaluate the quantitative significance of observed differences in discounted dividend responses. Results are reported in table 4.7, based on the risk measures of panel A of table 4.6. The discounted dividend responses do imply higher returns for
Table 4.6    Discounted responses in a four-variate VAR with aggregate profits

                          Portfolio
                   (1)      (2)      (3)      (4)      (5)
Discount factor   0.9943   0.9918   0.9902   0.9889   0.9894

A: Dividends to consumption shock (without cointegration)
OLS estimator      0.05     0.17     0.42     0.57     1.11
10th percentile   –0.34    –0.08     0.25     0.38     0.75
30th percentile   –0.04     0.09     0.34     0.48     0.95
Median             0.07     0.18     0.41     0.57     1.12
70th percentile    0.16     0.26     0.50     0.68     1.33
90th percentile    0.29     0.42     0.64     0.90     1.81

B: Dividends to earnings shock (without cointegration)
OLS estimator      0.02     0.08     0.32     0.00     0.04
10th percentile   –0.43    –0.26     0.10    –0.26    –0.47
30th percentile   –0.09    –0.03     0.23    –0.11    –0.17
Median             0.05     0.09     0.33     0.00     0.05
70th percentile    0.15     0.19     0.44     0.12     0.31
90th percentile    0.30     0.37     0.64     0.38     0.85

C: Dividends to consumption shock (with cointegration)
OLS estimator      0.64     1.20     1.56     1.47     1.37
10th percentile   –0.05     0.32     1.00     0.98     0.54
30th percentile    0.70     0.92     1.27     1.23     0.92
Median             1.04     1.26     1.53     1.46     1.25
70th percentile    1.39     1.66     1.93     1.75     1.72
90th percentile    2.22     2.55     2.80     2.44     2.88

D: Dividends to earnings shock (with cointegration)
OLS estimator      0.46     1.22     1.99     1.18     0.32
10th percentile   –0.62    –0.09     1.17     0.43    –0.91
30th percentile    0.52     0.82     1.57     0.81    –0.30
Median             1.00     1.33     1.97     1.14     0.16
70th percentile    1.52     1.94     2.55     1.59     0.81
90th percentile    2.80     3.32     4.05     2.64     2.49
Table 4.7    Implied value of the risk-aversion coefficient

                 Portfolio
           (1)    (2)    (3)    (4)
           222    228    159    163

Note: These values of the risk-aversion coefficient are computed using the discounted risk measures reported in panel A of table 4.6.
Table 4.8    Return responses in a four-variate VAR with aggregate profits

                          Portfolio
                   (1)      (2)      (3)      (4)      (5)
Discount factor   0.9943   0.9918   0.9902   0.9889   0.9894

A: No cointegration
On impact          2.15     1.89     1.75     1.77     2.24
Discounted         0.05     0.17     0.42     0.57     1.11

B: Cointegration
On impact          1.98     1.93     1.78     1.75     2.14
Discounted         0.64     1.20     1.56     1.47     1.37
the higher book-to-market portfolios, but the magnitude of the risks is small. As a result, we calculate high implied coefficients of relative risk aversion. This result has been noted extensively in the finance literature. By including prices along with dividends in the VAR, we have ensured that the present-value budget balance restriction is satisfied by construction. Any shock to log dividends must be offset by a shock to log returns through discounting provided log returns are measured via the log approximation of section 4.4. While we have focused our attention on how dividends respond to economically meaningful shocks, we now consider returns. In table 4.8 we compare the discounted return response (equal to the discounted dividend response from panel A of table 4.6) to the on-impact return response to a consumption shock. It is this latter response that is pertinent for the consumption-based asset pricing model described previously. As previously mentioned, in models based on recursive utility, the intertemporal composition of risk is known to matter. While the discounted return response increases across the portfolios, the same is not true of the onimpact return response. As in Hansen and Singleton (1983), the on-impact return response to consumption does not help to explain the observed heterogeneity in a cross section of portfolio returns.16 The discounted response is still potentially of interest in models where there is some type of delay in the consumption response due to adjustment costs or some type of misspecification of the model (see, e.g., Daniel and Marshall 1997). As the results of table 4.8 indicate, there is potentially important predictability in returns. This predictability has led the finance literature to decompose variation in price-dividend ratios into components based on predicted returns and predicted dividend growth (see, e.g., Campbell 1991 and Cochrane 1991a). We apply this decomposition to the book-to-market portfolios. 16. Hansen and Singleton (1983) based their findings on industry portfolios and not the book-to-market portfolios studied here.
As before, instead of using actual returns, we use the implied returns from the log-linear approximation. In this case, the present-value formula

p_t − d_t = Σ_{j=0}^∞ λ^j (d_{t+1+j} − d_{t+j} − r_{t+1+j}) + constant

holds by construction provided the growth in the estimated VAR is dominated by the discount factor λ. Of course, this same relation holds once expectations are taken:

p_t − d_t = Σ_{j=0}^∞ λ^j E(d_{t+1+j} − d_{t+j} | F_t) − Σ_{j=0}^∞ λ^j E(r_{t+1+j} | F_t) + constant.
The right-hand side gives us an ad hoc decomposition of the price-dividend ratio in terms of predicted future dividends and predicted future returns. We use this decomposition to account for the impulse-response functions of the price-dividend ratio. Any response of the price-dividend ratio to a shock must be due to the differential response of discounted expected dividend growth and discounted expected returns to that same shock.17 In figure 4.7 we plot the impulse response functions of the portfolio price-dividend ratios to a consumption shock for each of the five portfolios. Neither return nor dividend predictability dominates explanations of price-dividend variation across portfolios. While the return contribution is much more pronounced for portfolio 2, the dividend contribution is particularly important for portfolio 5.

4.7 Conclusions

In this paper we reviewed two findings pertinent for using asset market data to make inferences about the intangible capital stock. We presented evidence familiar from the empirical finance literature that returns are heterogeneous when firms are grouped according to their ratio of market equity to book equity. This evidence suggests that there are important differences in the riskiness of investment in measured capital vis-à-vis intangible capital. This has potentially important ramifications for how to build explicit economic models to use in constructing measurements of the intangible capital stock. A risk-based interpretation of return heterogeneity requires more than just a model with heterogeneous capital. It also requires a justification for the implied risk premiums. There has been much interest recently in the finance literature in using VAR methods to understand the riskiness of serially correlated cash flows or dividends. The discounted dividend risk measures constructed using VAR methods indicate that high book-to-market portfolio returns carry more economically relevant risk. The discounted responses are larger for
17. See appendix C for a more complete discussion of this calculation.
Fig. 4.7   Decomposition of price-dividend responses to a consumption shock
Notes: These estimates were constructed from a VAR with entries ct, et, pt, and dt. The matrix A(z) is restricted to be lower triangular in blocks of two, and A(0) is lower triangular. For each portfolio, a solid line depicts the impulse response of p – d, a dashed line depicts the impulse response of the dividend component of p – d, and a dashed and dotted line depicts the impulse response of the return component of p – d.
these portfolios. Moreover, the dividend response to a consumption shock for portfolio 5, a portfolio of the highest book-to-market returns, stands out relative to other dividend responses. The impulse response for this portfolio has a pronounced peak at around ten quarters. This peak is present in many of the VAR specifications of shocks. The shock responses are very different when we identify a permanent-transitory decomposition of shocks to consumption and dividends, but portfolio 5 still looks different relative to the other portfolios. Further, our results are sensitive to other specifications of the long-run dynamics and to our identification of the shocks. The empirical evidence we report follows the finance literature by focusing on the claims of equity holders. As emphasized by Hall (2001), what is pertinent for measurement purposes is the combined claims of bond holders and equity holders. It is the overall value of the firm or enterprise that
is pertinent. Similarly, this analysis focuses on dividends as the underlying claims of equity holders and not on the overall cash flows of the firms. The risks associated with broader-based cash flow measures are of considerable interest for future research.
Appendix A
Data Appendix

Consumption
We use aggregate consumption of nondurables and services taken from NIPA18 (table 2.2). The quarterly data are seasonally adjusted at annual rates, deflated by the implicit price deflator for nondurables and services consumption.

Corporate Earnings
Corporate earnings are measured as corporate profits with inventory valuation and capital consumption adjustments from NIPA (table 1.14); the quarterly data are deflated by the implicit price deflator for nondurables and services consumption.

BE-ME Portfolios19
We follow Fama and French (1992) in constructing portfolios ranked by book-to-market ratios. Five BE-ME portfolios are formed at the end of each June using New York Stock Exchange (NYSE) breakpoints. The BE used in June of year t is the book equity for the last fiscal year end in t – 1. ME is price times shares outstanding at the end of December of t – 1. We use all NYSE, American Stock Exchange (AMEX), and Nasdaq stocks for which we have ME for December of t – 1 and June of t, and BE for t – 1. For each stock, monthly returns with and without dividends and monthly market values are taken from the CRSP monthly stock data set. We take annual BE data from 1950 to 2001 from the CRSP/COMPUSTAT merged industrial data set.20 We thank Kenneth French for providing us with the annual BE data from 1926 to 1950. The two data sets are merged together with the CRSP data set using CRSP Permanent Company Number

18. Our source is the U.S. Department of Commerce, Bureau of Economic Analysis.
19. SAS code used to construct the portfolio monthly returns, BE-ME, and R&D-sales ratios is available upon request.
20. The CRSP monthly stock data set and the CRSP/COMPUSTAT merged data set are from Wharton Research Data Services, University of Pennsylvania.
(PERMNO). For each portfolio, the monthly returns with and without dividends from July of year t to June of year t + 1 are weighted averages of the stock returns with and without dividends in the same period, using the ME in June of year t as the weights. Portfolio book-to-market ratios in year t are the weighted averages of the stock book-to-market ratios in year t. R&D and sales data are taken from COMPUSTAT, available from 1950 to 2001. In year t, the portfolio R&D-sales ratio is the weighted average of the R&D-sales ratios of the firms in the portfolio in year t.

Dividends21
The dividend yield D_{t+1}/P_t is imputed from portfolio returns with dividends, R^w_{t+1} = (P_{t+1} + D_{t+1})/P_t, and returns without dividends, R^{wo}_{t+1} = P_{t+1}/P_t, as follows:

D_{t+1}/P_t = R^w_{t+1} − R^{wo}_{t+1}.

The change in this yield, along with the capital gain on the portfolio, is used to impute the growth in portfolio dividends:

D_{t+1}/D_t = [(D_{t+1}/P_t)/(D_t/P_{t−1})] × (P_t/P_{t−1}) = [(R^w_{t+1} − R^{wo}_{t+1})/(R^w_t − R^{wo}_t)] × R^{wo}_t.

From the dividend growth we impute the dividend level up to the initial value:

D_{t+1}/P_0 = (D_1/P_0) Π_{s=1}^{t} (D_{s+1}/D_s).

21. The portfolio dividends, prices, and returns series used in this paper are available upon request.
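A compact numerical version of this imputation (a sketch under the assumptions above, not the authors' SAS code; inputs are made up) is:

```python
import numpy as np

def impute_dividends(R_w, R_wo, d1_over_p0=1.0):
    """Impute the dividend series from gross portfolio returns with dividends (R_w)
    and without dividends (R_wo); element k is the return from period k to k + 1.
    The level is pinned down only by the assumed initial value D_1 / P_0."""
    dp = R_w - R_wo                          # dividend yield D_{k+1} / P_k
    growth = dp[1:] / dp[:-1] * R_wo[:-1]    # dividend growth D_{k+1} / D_k
    return d1_over_p0 * np.concatenate(([1.0], np.cumprod(growth)))

# Hypothetical monthly gross returns
R_w = np.array([1.030, 1.012, 1.045])
R_wo = np.array([1.020, 1.003, 1.034])
print(np.round(impute_dividends(R_w, R_wo), 4))
```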
From the monthly dividend series we compute quarterly averages. We initialize the dividends in 1947Q1 so that the dividend for the market portfolio in 1947Q1 equals corporate earnings in 1947Q1, and the BE-ME portfolio dividends are proportional to the market portfolio dividend according to their shares of market value. We then take a twelve-month trailing average because of the pronounced seasonal pattern in corporate dividend payouts. Our measure of quarterly dividends in quarter t is constructed by averaging the logarithm of dividends over quarter t and the previous three quarters, t – 3, t – 2, t – 1. We average the logarithm of dividends rather than the levels because our empirical modeling is linear in logs. This construction has the interpretation of following an initial investment of $1 in the portfolio and extracting the dividends while reinvesting the capital gains.
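The smoothing step amounts to a four-quarter trailing average of log dividends. The following is a minimal sketch with a made-up series; at the start of the sample it simply uses whatever quarters are available.

```python
import numpy as np

def smoothed_log_dividends(D):
    """Four-quarter trailing average of log dividends (quarters t-3, ..., t)."""
    logd = np.log(np.asarray(D, dtype=float))
    return np.array([logd[max(0, t - 3):t + 1].mean() for t in range(len(logd))])

print(np.round(smoothed_log_dividends([1.00, 1.10, 0.90, 1.20, 1.05]), 3))
```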
Returns and dividends are converted to real units using the implicit price deflator for nondurables and services consumption.

Price Deflator
Nominal consumption, corporate earnings, portfolio returns, and portfolio dividends are deflated by the implicit price deflator for nondurables and services consumption, which is a weighted average of the personal nondurables consumption implicit price deflator P^{CN}_t (1996 = 100) and the personal services consumption implicit price deflator P^{CS}_t (1996 = 100), taken from NIPA table 7.1. The weights are determined by the relative importance of nominal nondurables consumption (CN_t) and services consumption (CS_t); that is,

P^C_t = (CN_t P^{CN}_t + CS_t P^{CS}_t) / (CN_t + CS_t).
Appendix B
Bayesian Confidence Intervals

Consider the VAR

A(L)y_t + C_0 = w_t,

where y_t is d-dimensional. The matrix A(0) is lower triangular. We base inferences on systems that can be estimated equation by equation. The shock w_t is a normal random vector with mean zero and covariance matrix I. We follow Sims and Zha (1999) and Zha (1999) by considering a uniform prior on the coefficients. Given the recursive nature of our model, we may follow Zha (1999) by building the joint posterior for all parameters across all equations as a corresponding product. This requires that we include the appropriate contemporaneous variables on the right-hand side of each equation to ensure that w_{t+1} has the identity as its covariance matrix. In effect we have divided the coefficients of the VAR into blocks that have independent posteriors given the data. We construct posterior confidence intervals for the objects that interest us as nonlinear functions of the VAR coefficients.22 We computed the posterior confidence intervals by Monte Carlo methods, using characterizations in Zha (1999) and Box and Tiao (1973). Confidence intervals are centered around the posterior median computed in our simulation; the error bands are computed using the 10th and 90th
22. In making the prior uniform over all coefficients, we follow a suggestion by, but not the actual practice in, Sims and Zha (1999). There is a minor difference evident in the discussion in Sims and Zha; see page 1142.
Table 4B.1    Unstable fraction of simulations (%)

                                       Portfolio
                                 (1)   (2)   (3)   (4)   (5)
Log-level VAR                     2    14     1     1     3
Cointegrated VAR                 22    11    21     6    10
Cointegrated unrestricted VAR    20    14    19     8     2
Four-variate VAR                 11     9     2     7     5
Four-variate cointegrated VAR    38    24     5    13    19
percentiles. Our numbers are based on 100,000 simulations, discarding the unstable systems. The unstable fractions of the simulated systems for the different models we use are reported in table 4B.1.
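In outline, the Monte Carlo step works as follows. The sketch below uses hypothetical helper names; the sampler that draws VAR coefficients from the posterior is assumed to exist and is not shown. Stability is checked through the companion matrix, unstable draws are dropped as in table 4B.1, and percentiles of the statistic of interest are taken over the surviving draws.

```python
import numpy as np

def is_stable(Phi_list):
    """All eigenvalues of the companion matrix strictly inside the unit circle."""
    p, n = len(Phi_list), Phi_list[0].shape[0]
    comp = np.zeros((n * p, n * p))
    comp[:n, :] = np.hstack(Phi_list)
    if p > 1:
        comp[n:, :-n] = np.eye(n * (p - 1))
    return bool(np.all(np.abs(np.linalg.eigvals(comp)) < 1.0))

def posterior_bands(draws, stat_fn):
    """draws: list of posterior draws, each a list of VAR lag matrices.
    Returns 10th/50th/90th percentiles of stat_fn over the stable draws and
    the fraction of draws discarded as unstable."""
    stable = [d for d in draws if is_stable(d)]
    stats = np.array([stat_fn(d) for d in stable])
    return np.percentile(stats, [10, 50, 90], axis=0), 1.0 - len(stable) / len(draws)

rng = np.random.default_rng(1)
draws = [[0.5 * np.eye(2) + 0.05 * rng.standard_normal((2, 2))] for _ in range(500)]
bands, frac = posterior_bands(draws, stat_fn=lambda d: d[0][1, 0])
print(np.round(bands, 3), round(frac, 3))
```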
Appendix C
Decomposition of Price-Dividend Ratios

We decompose the price-dividend ratios into two components. Recall that

p_t − d_t = π(L)w_t + μ_p,

where (λ − z)π(z) = κ(z) − γ(z), and γ(λ) = κ(λ). Write

π(z) = [γ(λ) − γ(z)]/(λ − z) − [κ(λ) − κ(z)]/(λ − z).

The functions

[γ(λ) − γ(z)]/(λ − z),   [κ(λ) − κ(z)]/(λ − z)

have one-sided power series convergent for |z| < 1. In particular, we interpret

[κ(λ) − κ(z)]/(λ − z)

as the transform of the moving-average coefficients for the return contribution to the price-dividend ratio, and

[γ(λ) − γ(z)]/(λ − z)

as the dividend contribution to the price-dividend ratio. The coefficients of these transforms give us a corresponding additive decomposition of the impulse response function for the logarithm of the price-dividend ratio.

Consider a VAR in which the logarithm of the price-dividend ratio and the logarithm of dividend growth are included. It is simplest to work with the VAR written as a first-order system,

X_{t+1} = ΦX_t + Ψw_{t+1},
d_t = Θ_d X_t,
p_t = Θ_p X_t,

with an expanded state vector. The impulse response function for the price-dividend ratio to the first shock is given by

(Θ_p − Θ_d)Φ^j Ψe_1,

where e_1 is a vector of zeros except in the first position, where there is a unit coefficient. The impulse response function for the expected discounted dividend growth is given by

Θ_d[(Φ − I)(I − λΦ)^(-1)]Φ^j Ψe_1.

The impulse response function for the return contribution is

[Θ_d(1 − λ)Φ(I − λΦ)^(-1) − Θ_p]Φ^j Ψe_1.
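To make the decomposition operational, the sketch below (using the notation above, with made-up inputs; a hypothetical illustration rather than the authors' code) computes the three impulse responses from a first-order system and checks that the dividend and return components add up to the price-dividend response.

```python
import numpy as np

def pd_decomposition_irfs(Phi, Psi, theta_d, theta_p, lam, horizons=40):
    """Impulse responses, to the first shock, of p - d, of the discounted expected
    dividend-growth component, and of the discounted expected return component."""
    n = Phi.shape[0]
    e1 = np.zeros(Psi.shape[1]); e1[0] = 1.0
    M = np.linalg.inv(np.eye(n) - lam * Phi)
    pd_r, div_r, ret_r, Phi_j = [], [], [], np.eye(n)
    for _ in range(horizons):
        s = Phi_j @ Psi @ e1
        pd_r.append((theta_p - theta_d) @ s)
        div_r.append(theta_d @ (Phi - np.eye(n)) @ M @ s)
        ret_r.append((1 - lam) * (theta_d @ Phi @ M @ s) - theta_p @ s)
        Phi_j = Phi_j @ Phi
    return map(np.array, (pd_r, div_r, ret_r))

Phi = np.array([[0.9, 0.1], [0.0, 0.5]])
pd_r, div_r, ret_r = pd_decomposition_irfs(Phi, np.eye(2),
                                           theta_d=np.array([1.0, 0.0]),
                                           theta_p=np.array([1.0, 1.0]), lam=0.99)
print(np.allclose(pd_r, div_r - ret_r))   # the two components add up by construction
```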
References Atkeson, A., and P. J. Kehoe. 2002. Measuring organizational capital. Federal Reserve Bank of Minneapolis Research Department Staff Report no. 291. Bansal, R., R. Dittmar, and C. Lundblad. 2002a. Consumption, dividends, and the cross-section of equity returns. Duke University, Fuqua School of Business; University of Michigan Business School; University of Indiana, Bloomington, Department of Finance. Working Paper. ———. 2002b. Interpreting risk premia across size, value, and industry portfolios. Duke University, Fuqua School of Business; University of Michigan Business School; University of Indiana, Bloomington, Department of Finance. Working Paper. Blanchard, O. J., and D. Quah. 1989. The dynamic effects of aggregate demand and supply disturbances. American Economic Review 79:655–73. Boldrin, M., L. Christiano, and J. Fisher. 2001. Habit persistence, asset returns, and the business cycle. American Economic Review 91:149–66.
Box, G. E. P., and G. C. Tiao. 1973. Bayesian inference in statistical analysis. Reading, MA: Addison-Wesley. Brynjolfsson, E., and L. Hitt. 2004. Intangible but not unmeasurable: Some thoughts on the measurement and magnitude of intangible assets. In C. Corrado, J. Haltiwanger, and D. Sichel (Eds.), Measuring Capital in the New Economy. Chicago: University of Chicago Press. Campbell, J. 1991. A variance decomposition for stock returns. Economic Journal 101:157–79. Campbell, J., and R. Shiller. 1988a. The dividend-price ratio and expectations of future dividends and discount factors. Review of Financial Studies 1:195–227. ———. 1988b. Stock prices, earnings, and expected dividends. Journal of Finance 43:661–76. Cochrane, J. H. 1991a. Explaining the variance of price-dividend ratios. Review of Financial Studies 5:243–80. ———. 1991b. Production-based asset pricing and the link between stock returns and economic fluctuations. Journal of Finance 46:209–37. Cochrane, J. H., and L. P. Hansen. 1992. Asset pricing explorations for macroeconomics. In NBER macroeconomics annual, ed. Olivier Blanchard and Stanley Fisher, 115–65. Cambridge: MIT Press. Cohen, R., C. Polk, and T. Vuoteenaho. 2002. Does risk or mispricing explain the cross-section of stock prices? Harvard University, Finance Unit; Northwestern University, Kellogg School of Management; Harvard University, Department of Economics. Working Paper. Daniel, K., and D. Marshall. 1997. Equity-premium and risk-free-rate puzzles at long horizons. Macroeconomic Dynamics 1:452–84. Engle, R. F., and C. W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55 (2): 251–76. Epstein, L. G., and S. E. Zin. 1989. Substitution, risk aversion, and the temporal behavior of consumption and asset returns: A theoretical framework. Econometrica 57:937–69. Fama, E., and K. French. 1992. The cross-section of expected stock returns. Journal of Finance 47:427–65. Granger, C. W. J. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37 (2): 424–38. Grunfeld, Y. 1960. The determinants of corporate investment. In The demand for durable goods, ed. A. C. Harberger, 211–66. Chicago: University of Chicago Press. Hall, R. E. 2001. The stock market and capital accumulation. American Economic Review 91:1185–1202. Hansen, L. P., and R. Jagannathan. 1991. Implications of security market data for models of dynamic economies. Journal of Political Economics 99 (2): 225–62. Hansen, L. P., and S. J. Richard. 1987. The role of conditioning information in deducing testable implications of asset pricing models. Econometrica 55 (3): 587– 613. Hansen, L. P., W. Roberds, and T. J. Sargent. 1991. Time series implications of present value budget balance and of martingale models of consumption and taxes. In Rational expectations econometrics, ed. L. P. Hansen and T. J. Sargent, 121–61. Boulder, CO: Westview Press. Hansen, L. P., and K. Singleton. 1983. Stochastic consumption, risk aversion, and the temporal behavior of asset returns. Journal of Political Economy 91:249–65. Harrison, J., and D. Kreps. 1979. Martingales and arbitrage in multiperiod security markets. Journal of Mathematical Economics 20:381–408.
Hayashi, F. 1982. Tobin’s marginal q and average q: A neoclassical interpretation. Econometrica 50:213–24. Jermann, U. J. 1998. Asset pricing in production economies. Journal of Monetary Economics 41:257–75. Lettau, M., and S. Ludvigson. 2001. Resurrecting the (c)capm: A cross-sectional test when risk premia are time-varying. Journal of Political Economy 109:1238– 87. Lucas, R. E., and E. C. Prescott. 1971. Investment under uncertainty. Econometrica 39:659–81. MacKinlay, A. 1995. Multifactor models do not explain deviations from the capital asset pricing model. Journal of Financial Economics 38:3–28. McGrattan, E. R., and E. C. Prescott. 2000. Is the stock market overvalued? Federal Reserve Bank of Minneapolis Quarterly Review 24:20–40. ———. 2003. Taxes, regulations, and the value of U.S. and U.K. corporations. Federal Reserve Bank of Minneapolis Staff Report. Santos, T., and P. Veronesi. 2001. Labor income and predictable stock returns. University of Chicago, Graduate School of Business. Working Paper. Sims, C. A. 1972. Money, income, and causality. American Economic Review 62 (4): 540–52. Sims, C. A., and T. Zha. 1999. Error bands for impulse responses. Econometrica 67: 1113–56. Telser, L. G. 1988. Some determinants of the returns to corporate investment. In Theories of competition, 335–92. Amsterdam: North Holland. Zha, T. 1999. Block recursion and structural vector autoregression. Journal of Econometrics 90:291–316.
Comment
Susanto Basu
The paper by Hansen, Heaton, and Li (henceforth HHL) contains a number of important concepts and interesting results, but the pieces never quite come together to form a coherent picture. At the end of the day, we are left somewhat puzzled as to exactly what the authors were trying to do and what they believe they have actually done. These comments are my attempt to supply those missing connections. HHL wish to contribute to the literature on "intangible capital," presented recently in its most provocative form by Hall (2001). My interpretation of this paper is that the authors agree that Hall's approach is potentially valid, but then point out—and test—additional asset-pricing implications of his model in an attempt to assess its plausibility. Hall begins from the observation that in the steady state of a neoclassical model with endogenous capital accumulation, one can infer the quan-

Susanto Basu is professor of economics at the University of Michigan, a research associate of the National Bureau of Economic Research, and a member of the executive committee of the Conference on Research in Income and Wealth.
tity of a firm’s capital from its value in financial markets. When there are positive but reasonable costs of adjusting the capital stock, one needs to make a modest adjustment, because the value of a firm will exceed the value of its capital if investment is positive. Thus, as HHL review in their paper, Hall can use the static first-order condition for investment from the model of Hayashi (1982) to infer the total quantity of capital, both tangible and intangible. But the static equation for investment is not the only first-order condition of Hayashi’s model; there is also a dynamic first-order condition, the Euler equation. In a stochastic, discrete-time formulation (unlike Hayashi’s deterministic, continuous-time model), that Euler equation can be written as (1)
E_t [ F_K(K_{t+1}, L_{t+1})/q_t + (q_{t+1} − q_t)/q_t ] = r_{t+1} + ξ_{t+1}.
In this equation, E_t is the expectation of future variables conditional on time-t information; F is the firm's production function; q is the marginal value of a unit of installed capital; r is a risk-free real interest rate; and ξ is a risk premium appropriate for discounting the stochastic cash flow of the firm (where ξ may be negative in the case of a "negative-beta" asset). Hall (2001) needs to assume Hayashi's (1982) conditions to measure marginal q from asset prices, since in general asset prices measure only Tobin's average q. But if one can measure marginal q from asset prices, then one can also estimate equation (1) using asset market data. For simplicity, consider a firm that is 100 percent equity financed. Then the left-hand side of equation (1) is the expected return to holding equity—expected dividends plus expected capital gains, divided by the purchase price of capital to create a percentage return. Given a model that relates ξ to observables (for example, the consumption CAPM), one can test whether the predictions of equation (1) hold. Thus, in a sense HHL are pointing out an overidentifying restriction of Hall's intangible-capital model, which can be tested with asset market data. The major uncertainty regarding Hall's paper has always been whether the asset values of firms in the late 1990s reflected large amounts of intangible capital—with the capital gains of those years interpreted as equally large investments in intangibles—or whether the high stock values and large capital gains were the classic symptoms of an unsustainable bubble ("irrational exuberance") in equity markets. In principle, one can answer this question using aggregate time series data to construct the holding returns in equation (1). If the holding returns, over long periods of time, are in line with the historical norms for equity returns, then Hall's rational model of valuation will be validated. If, on the other hand, the 1990s are followed by a long period of abnormally low returns, then the evidence will favor the mispricing model.
The problem with this strategy, of course, is that one would have to wait for at least ten years to conduct a test with enough data to settle this issue convincingly. Since such a delay is completely unacceptable for policy purposes, HHL propose an interesting alternative. They note that the defining characteristic of equity values in the late 1990s was a low ratio of book-tomarket values of stocks. (Hall, of course, interpreted this observation as indicating a high ratio of intangible to tangible capital, since only the latter are recorded on firms’ books.) HHL note further that one can observe similarly low book-market ratios in the historical cross section of equities. Thus, assuming that the features that led to a low book-market (B-M) ratio for the aggregate stock market in the 1990s are the same forces that lead some firms to have low B-M ratios in the cross-section distribution—a big assumption, but a reasonable hypothesis to investigate—we can use known facts about the subsequent returns of firms with low B-M values to shed some light on the intangible-capital versus mispricing debate. On a first look, the cross-sectional evidence strengthens the case for mispricing. Shleifer (2000, chap. 5) reviews a number of empirical papers in this area. All find that value stocks (ones with high B-M ratios) outperform growth or glamour stocks (those with low B-M ratios). However, HHL investigate a new possibility suggested by several recent empirical finance papers. These papers argue that stocks with low B-M ratios are actually less risky, and thus their lower expected returns can be justified by riskbased considerations. That is, in terms of equation (1), HHL note that a low ex post return is justified by a lower risk premium, . As the authors note, in the intangible-capital context this story requires that intangible capital be less risky than physical capital. However, my reading of the paper is that HHL’s attempt to investigate this hypothesis turns up little solid evidence in favor of the risk-based explanation for the low returns of growth stocks. The results are quite fragile, and there seem to be few findings that are robust to reasonable alternative empirical specifications. Since these recent papers on risk-based explanations for the low returns of growth stocks have excited considerable interest in the finance literature, this lack-of-robustness result alone makes HHL’s paper quite valuable. Furthermore, it is important to realize that the risk-based story requires that investors consciously accept lower expected returns ex ante, in exchange for the supposed reduction in risk. Thus, according to the riskbased story, surveys of investors in the late 1990s should show that investors expected lower returns going forward.1
1. The dinner-table conversation implied by the risk-based model is something like the following: “Honey, today I put all our retirement assets in new-economy, Nasdaq stocks. I know, I know—we would do much better investing in old-economy companies. But, by golly, this is safe as a house!” The spouse’s reaction is left to the imagination of the reader.
However, the evidence strongly contradicts this hypothesis. Vissing-Jorgensen (2004) analyzes surveys of the returns that consumers expected, starting in 1998. Figure 1 in her paper contradicts the low-expected-return hypothesis at every point. Expected one-year returns were incredibly optimistic in 2000 and fell sharply in 2001–2. Of course, B-M ratios were low in 2000 and higher in 2001–2, so the relation of expected returns to B-M ratios is exactly the opposite of what is needed to rationalize the risk-based explanation. In fact, the evidence in Vissing-Jorgensen's paper is quite consistent with a model where investors "chase returns" by naively extrapolating recent high returns into the future. This evidence does not bode well for Hall's intangible-capital interpretation of the high stock values in the 1990s. In sum, the paper by Hansen, Heaton, and Li is a valuable contribution to empirical finance, because it shows that the recent challenge to a long-standing finance puzzle, the value stock premium, is itself subject to challenge. The econometric evidence in this paper is a good complement to the survey-based evidence I have just discussed. Together, the two suggest that the torrid stock market of the late 1990s reflected a classic bubble and not rational valuation of large quantities of intangible capital. But only time will tell for certain.
References
Hall, R. E. 2001. The stock market and capital accumulation. American Economic Review 91:1185–1202.
Hayashi, F. 1982. Tobin's marginal q and average q. Econometrica 50:213–24.
Shleifer, A. 2000. Inefficient markets: An introduction to behavioral finance. New York: Oxford University Press.
Vissing-Jorgensen, A. 2004. Perspectives on behavioral finance: Does "irrationality" disappear with wealth? Evidence from expectations and actions. In NBER macroeconomics annual 2003, ed. Mark Gertler and Kenneth Rogoff, 139–94.
5
The Relation among Human Capital, Productivity, and Market Value
Building Up from Micro Evidence
John M. Abowd, John Haltiwanger, Ron Jarmin, Julia Lane, Paul Lengermann, Kristin McCue, Kevin McKinney, and Kristin Sandusky
5.1 Introduction The measurement of intangibles and human capital, important both for goods-producing and for service-producing industries, has always been a difficult challenge for the statistical system. The growth of the “new econJohn M. Abowd is the Edmund Ezra Day Professor of Industrial and Labor Relations at Cornell University, director of the Cornell Institute for Social and Economic Research (CISER), distinguished senior research fellow at the Longitudinal Employer Household Dynamics (LEHD) program at the U.S. Census Bureau, research associate at the National Bureau of Economic Research (NBER), research affiliate at the Centre de Recherche en Économie et Statistique (CREST), research fellow at IZA (Institute for Labor Economics), and a member of the executive committee of the Conference on Research in Income and Wealth (CRIW). John Haltiwanger is professor of economics at the University of Maryland and a research associate of the NBER, a senior fellow at LEHD, and is on the executive committee of the CRIW. Ron Jarmin is Assistant Div. Chief for Research at the Center for Economic Studies (CES), U.S. Census Bureau, and is on the executive committee of CRIW. Julia Lane is program director at the National Science Foundation and senior research fellow of the LEHD program at the U.S. Census Bureau. Paul Lengermann is an economist in the Industrial Output Section of the Division of Research and Statistics at the Federal Reserve Board. Kristin McCue is an economist with CES, U.S. Census Bureau. Kevin McKinney is an economist in the LEHD program at the U.S. Census Bureau. Kristin Sandusky is an economist in the LEHD program at the U.S. Census Bureau. The authors wish to acknowledge the substantial contributions of the staff in the Longitudinal Employer-Household Dynamics (LEHD) Program at the U.S. Census Bureau. We would also like to thank the participants at a pre-conference for helpful comments. This research is a part of the U.S. Census Bureau’s LEHD Program, which is partially supported by National Science Foundation Grant SES-9978093 to Cornell University (Cornell Institute for Social and Economic Research), the National Institute on Aging, the U.S. Department of Labor (ETA), and the Alfred P. Sloan Foundation. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the U.S. Census Bureau or the National Science Foundation. Confidential data from the LEHD Program were used in this paper. The U.S. Census Bureau is preparing to support external researchers’ use of these data under a protocol to be released in the near future.
omy” has made responding to this challenge even more urgent: the need to understand how such inputs affect the value chain of productivity, growth, and firm value now surpasses the need to measure the contribution of bricks, mortar, and equipment. Yet the changes that have brought the new economy into existence have also highlighted the need for improvements to traditional measures of inputs and outputs (Haltiwanger and Jarmin 2000), especially for human capital. Finding new measures of human capital that are both quantifiable and available for a sample large enough for use in official economic statistics is a formidable challenge. This paper uses newly available micro-level data from the U.S. Census Bureau on both employers and employees to demonstrate a new approach to addressing this challenge. We use new measures of human capital that directly capture the market valuation of the portable component of skill, including the contribution of “observable” and “unobservable” dimensions of skill. In principle, the measures go beyond indirect proxies (such as measures of years of formal education) to quantify the value of individual specific skills, such as innate ability, visual or spatial skills, nonalgorithmic reasoning, analytic or abstract decision making, and “people skills” (Bresnahan, Brynjolfsson, and Hitt 2002). We present exploratory empirical results that relate the new human capital measures to measures of labor productivity and market value. An additional challenge has been to document the sources of firm-level heterogeneity in productivity, growth, and value. One of the key findings of the literature using micro-level data is that large differences exist across many dimensions of firm inputs and outcomes. In particular, employers exhibit little uniformity in either the methods they use to hire and fire workers or in the selection of types of workers they employ. We therefore use measures of the dispersion of the firm-level human capital distribution to capture relevant aspects of firm-level differences in organizational capital and workplace practices. We begin by describing the background, motivation, and underlying specifications used in this chapter. Next, we describe the newly created data sources and measures that underlie our study. The subsequent section provides an exposition of the measurement of human capital that is made possible by the new census data. We then present exploratory empirical results that relate our new human capital measures to measures of firm performance including labor productivity and market value. The final section concludes the paper. 5.2 Background, Motivation, and Specifications The wide-ranging literature on human capital and intangibles is impossible to summarize here. We can, however, provide a brief background to provide some perspective on our approach. We begin with a discussion of
our methodology for measuring human capital and then consider the role of human capital at the firm level by focusing on its potential relation to productivity, market value, and tangible and intangible assets. 5.2.1 Human Capital: Conceptual and Measurement Issues The importance of human capital in accounting for observed differences in wages and productivity has a very long history in economics. Becker (1964) and many others helped the profession define the components of human capital, and the contribution of human capital to productivity has been intensively and exhaustively studied (e.g., Jorgenson, Gollup, and Fraumeni 1987—hereafter JGF). Clearly, we stand on the shoulders of these researchers, but our approach differs in key ways that depend critically on the availability of data. In particular, our conceptual and measurement approach depends not only on the availability of longitudinal, matched employer-employee data but also on the availability of universe files of all workers and firms. The starting point for our approach has been well documented and investigated in papers by Abowd, Kramarz, and Margolis (1999—hereafter AKM) and Abowd, Lengermann, and McKinney (2003—hereafter ALM). In this paper, we exploit newly developed measures of human capital that have emerged from this work and are part of a new program at the Census Bureau called the Longitudinal Employer-Household Dynamics (LEHD) program. These and related papers emphasize a point that has long been known in the study of human capital—direct measurements of human capital are very difficult. The standard approach is to take advantage of the “usual suspects”—for example, education and experience—and to build proxies for human capital using such measures. In the productivity literature referenced above, this approach has made extensive use of household data. Using primarily the Current Population Survey (CPS), JGF create detailed measures of human capital from person-level data in the United States by exploiting wage differences across groups defined by gender, experience, and education. They aggregate these measures by industry and at the total economy level. Their work demonstrates the existence of an enormous stock of human capital in the U.S. economy and shows that the stock and flows of this asset are vitally important for understanding changes in labor productivity. However, JGF (and subsequent related work, including the Jorgenson, Ho, and Stiroh chapter in this volume) recognize that this approach has limitations. Clearly, industry- and economywide aggregates fail to capture the firm-level variation that is a driving force in productivity growth. In addition, the existing data provide only a relatively small set of observable characteristics of workers, thereby creating measurement problems, and omit measures of unobservable skill and confounding firm effects. Using a college degree as a measure of human capital, for example, fails to capture
differences in school quality and program of study (Aaronson and Sullivan 2001). The large portion of wage variation that cannot be explained by these variables highlights the important role of the unobserved component of skill in the measurement of human capital, as has been emphasized in the recent literature on rising wage inequality in the United States (see, e.g., Juhn, Murphy, and Pierce 1993). Finally, earnings measures include the returns to working with particular firms—for example, large, highly unionized, or profitable entities—and there is sorting among workers and firms. For these reasons, AKM note that the estimates of returns to measured and unobservable components of human capital may be biased. We will describe in detail the econometric and measurement approach in subsequent sections, but we will review here the basic specification used by AKM and ALM so that we can discuss the conceptual basis of our human capital measures. The core statistical model is (1)
w_{ijt} = θ_i + x_{it}β + ψ_{J(i,t)} + ε_{ijt} .
The dependent variable is the log wage rate of individual i working for employer j at time t, while the function J(i, t) indicates the employer of i at date t. The first component is a time-invariant person effect, the second the contribution of time-varying observable individual characteristics, the third the firm effect, and the fourth the statistical residual, orthogonal to all other effects in the model. In what follows, we use the person effect, θ, plus the experience component of x_{it}β as the core measure of human capital, called “h” (i.e., h_{it} = θ_i + x_{it}β).¹ We also exploit these components separately, as they clearly represent different dimensions of human capital or skill.

1. See ALM for details of the estimation procedure. Additional controls in x include year effects interacted with gender effects and full-quarter employment adjustments (not all workers work full quarters).
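The sketch below illustrates the mechanics of this decomposition on simulated data; it is not the chapter's estimation code. The sample is tiny, the variable names are hypothetical, and the person and firm effects are identified only up to a normalization, whereas the actual LEHD estimates are computed with specialized solvers on universe files (see ALM).

```python
# Toy illustration of equation (1), w_ijt = theta_i + x_it*beta + psi_J(i,t) + e_ijt,
# estimated by brute-force dummy-variable OLS on simulated matched data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_workers, n_firms, n_years = 200, 20, 5

# Simulated matched employer-employee panel: one row per person-year.
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_workers), n_years),
    "year": np.tile(np.arange(n_years), n_workers),
})
df["firm"] = rng.integers(0, n_firms, len(df))              # J(i, t)
df["experience"] = 5 + df["year"] + rng.normal(0, 1, len(df))

theta = rng.normal(0, 0.4, n_workers)                        # person effects
psi = rng.normal(0, 0.2, n_firms)                            # firm effects
df["log_wage"] = (theta[df["person"]] + 0.03 * df["experience"]
                  + psi[df["firm"]] + rng.normal(0, 0.3, len(df)))

# Two-way fixed effects regression; explicit dummies are feasible only because
# the example is this small. Effects are identified up to a normalization
# (the omitted person and firm categories are set to zero).
fit = smf.ols("log_wage ~ experience + C(person) + C(firm)", data=df).fit()

# Portable human capital: h_it = estimated person effect + experience * beta.
beta = fit.params["experience"]
person_fe = pd.Series({i: fit.params.get(f"C(person)[T.{i}]", 0.0)
                       for i in range(n_workers)})
df["h"] = person_fe.loc[df["person"]].to_numpy() + beta * df["experience"]
print(df[["person", "firm", "h"]].head())
```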
For current purposes, our approach has three conceptual and measurement advantages over earlier approaches. First, because we have data on the universe of workers and of firms, we can create both firm-based and industry-based measures of human capital that include measures of dispersion as well as central tendencies. In particular, the new data permit the measurement of h and its underlying components for all workers. Further, because we can associate all of these workers with their employers, we can consider the full distribution of human capital for each firm and industry. Second, the measure of h includes a broader measure of skill—the market valuation of a number of observable and unobservable components—and, as such, encompasses various measures of skill, including education.² Because it includes the person effect, which can be thought of as the portable time-invariant component of a person’s wage, the measure of h also captures the influence of unobservable components of skill. Third, because the AKM approach controls for firm effects in estimating the person effects, our measure of human capital does not reflect personnel policies of the firm that may alter the returns to observable and unobservable dimensions of skill.

2. In most of the analysis that follows, we do not separate the effect of observable characteristics, such as education, from those of unobservable components. For subsamples of our universe files, we can measure education, and some of the results (see, e.g., ALM) to which we refer are based upon such analysis. There is a large ongoing effort at the LEHD program to incorporate such observable characteristics on a more comprehensive basis, including the development of robust imputation procedures for our universe files.

Our approach has some limitations. First, the estimation method provides person effects and firm effects that are time invariant. Both theory and evidence suggest that firm effects and the returns to different dimensions of skill may be time varying. Permitting time variation in the person and firm effects (e.g., through estimating a mixed-effects or Bayesian model) is an active area of research in the LEHD program. For now, our interpretation is that we have a time average of the relevant effects. Second, the specification does not permit any interaction between the firm effects and the person effects. The implicit assumption in this specification is that firms pay the same premium (or discount) regardless of the type of worker. This, too, is an area for future work, but in this case we have some evidence to support our assumption. For example, Groshen (1991a,b) finds that establishment wage differentials exist across occupations within establishments. Groshen’s result has been updated and confirmed by Lane, Salmon, and Spletzer (2002), who use recent data from the Occupational Employment Statistics survey of the Bureau of Labor Statistics (BLS); they find that firms that pay premiums to their accountants also pay premiums to their janitors. Third, this specification does not permit any coworker effects. Yet Lengermann (2002b) has shown that whom you work with matters, as well as who you are and for whom you work.

Despite these concerns, we regard this base specification as a significant advance over the standard approach of measuring human capital via observable education and experience, for the reasons discussed above. We explore many of these issues in what follows. For example, in section 5.3, we report the characteristics of our human capital measures and, along the way, compare our results to more traditional estimates of human capital. For now, we proceed to a discussion of how and why human capital might matter for productivity and market value.

5.2.2 Human Capital, Tangible Assets, Intangible Assets, and Productivity

The relation between output and inputs is summarized by the standard production function approach. Explicit recognition of human capital and intangibles augments this function in the following specification of an intensive production function:

(2)    y_{jt} = F_j(K^T_{jt}, K^I_{jt}, H_{jt}),
where y_{jt} is output per worker for firm j at time t, K^T_{jt} is tangible physical capital per worker, K^I_{jt} is intangible capital per worker, and H_{jt} represents measures of the distribution of human capital of the workers at the firm.³

3. One issue that we do not explore here is the relation of the firm effects to productivity and market value. Firm effects potentially capture many factors—rent sharing, firm personnel policies, efficiency wages, and the effect of collective bargaining—that may be positively or negatively related to productivity and market value. Identifying and exploring these different effects is a rich area for future work. For more discussion of this issue and related empirical work, see Abowd (1989, 1990).

One of the conceptual issues that arises naturally in this setting is the possibility that various features of the within-firm distribution of human capital differentiate firms and are related to productivity. Consequently, studying the entire within-firm distribution of skill rather than only the mean is justified by a number of theoretical underpinnings. The relation between skill and productivity at the individual level might be nonlinear. For example, empirical results support the widely held view that, for the experience dimension of skill, the earnings-experience profile for an individual worker is strictly concave and thereby reflects an underlying productivity-experience profile that is likewise concave. Some other dimensions of skill may, however, yield a convex productivity-skill profile. For example, “superstar” managers (e.g., Bill Gates) may make a disproportionate contribution to the firm’s productivity and value. A related idea is that the relation between productivity and, say, innate ability at the individual level may be strictly convex.

For our purposes, any nonlinear relation between the productivity of a worker and the worker’s skill implies that higher moments of the within-firm distribution of human capital will account for some variation in productivity at the firm level. For example, if the productivity-skill relation for, say, the dimension of experience, is strictly concave for the individual, then firms with greater dispersion on that dimension will have lower firm-level productivity, other things being equal. In contrast, if the productivity-skill relation for the individual worker is strictly convex, then firms with greater dispersion on that dimension of skill will have higher firm-level productivity. These examples suggest only a few of the reasons that higher moments of the within-firm distribution of human capital might be related to differences in productivity across firms. The models of Kremer and Maskin (1996) suggest that, in some production environments, interaction effects arise across the skills of coworkers in the production function. For example, if the coworker interaction effect reflects complementarities across skill groups at the firm, then a business with lower dispersion will be more productive.
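A small numerical illustration of the point about curvature, not drawn from the chapter: under a concave worker-level productivity-skill relation, a mean-preserving spread in skill lowers average output per worker, while under a convex relation it raises it. The functional forms below are arbitrary stand-ins.

```python
# Jensen's-inequality illustration: how within-firm skill dispersion affects
# firm-level output per worker under concave versus convex worker productivity.
import numpy as np

rng = np.random.default_rng(1)

def avg_productivity(skills, f):
    # Firm-level output per worker when each worker's productivity is f(skill).
    return f(skills).mean()

concave = np.sqrt            # stand-in for an experience-like dimension of skill
convex = lambda s: s ** 2    # stand-in for a "superstar" dimension of skill

for spread in (0.1, 0.5):
    # Mean-preserving spread around the same average skill level of 1.0.
    skills = rng.uniform(1.0 - spread, 1.0 + spread, 100_000)
    print(f"spread={spread}: concave {avg_productivity(skills, concave):.4f}, "
          f"convex {avg_productivity(skills, convex):.4f}")
# With the larger spread, average output falls under the concave technology
# and rises under the convex one, holding mean skill fixed.
```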
In the analysis that follows, we are not imposing enough structure to be able to identify which of these alternative and interesting factors may be the reason that the within-firm distribution of human capital matters for differences in productivity across firms. Instead, we take an eclectic, exploratory approach and simply include various measures of the within-firm distribution of human capital in our estimation of the relation between productivity, market value, and human capital.

Besides the interesting issues about the role of the within-firm distribution of human capital, unmeasured (i.e., omitted) tangible assets and intangible assets may also complicate the empirical link between human capital and productivity. Any complementarities that may exist between unmeasured assets (or poorly measured assets) and human capital may be reflected in any estimated relation between productivity and human capital. Moreover, the nature of intangible assets may be very closely connected to how human capital is organized.

5.2.3 Defining and Measuring Intangibles

A major issue confronting the productivity literature is the problem of studying the effects of intangible assets when those assets are not readily measurable. Thus, although some argue that intangibles are the “major drivers of corporate value and growth in most sectors” (Gu and Lev 2001), the definition of intangibles varies widely, from knowledge and intellectual assets (Gu and Lev); to human capital, intellectual property, brainpower, and heart (Gore 1997); to knowledge assets and innovation (Hall 1998); to organizational structure (Brynjolfsson, Hitt, and Yang 2002—hereafter BHY). Measures of these variables have been equally diverse and have included a residual approach, inference, and direct measurement. For example, although Gu and Lev (2001) conceptualize intangibles as knowledge assets (new discoveries, brands, or organizational designs), they derive their measure of intangibles as a residual; that is, as the driver of economic performance after accounting for the contribution of physical and financial assets. In empirical terms, they identify the core determinants of intangibles as research and development, advertising, information technology, and a variety of human resource practices. In a series of papers, Hall uses direct expenditures on research and development as well as patent information to proxy for knowledge assets. BHY use survey data on the “allocation of various types of decision making authority, the use of self-managing teams, and the breadth of job responsibilities” (p. 15) to construct a composite variable that acts as a proxy for organizational capital.

The results from using these measures suggest that intangibles vary considerably across firms and sectors and that they are important in accounting for fluctuations in the market. Gu and Lev (2001), who use the broadest measure of intangibles, find that the level and growth rate of intangibles vary substantially across industries. In particular, they find the highest levels in insurance, drugs, and telecommunications; the lowest in trucking, wholesale trade, and consulting. However, they find the highest growth
rates in consulting, machinery, and electronics industries; the lowest in retail trade, restaurants, and primary metals. Gu and Lev also find that intangibles-driven earnings (by two different measures) are much more highly correlated with stock market returns than are other measures, notably the growth of operating cash flow and the growth of earnings. BHY find that organizational structure has a large effect on market valuation: firms that score 1 standard deviation higher than the mean on this measure have approximately $500 million greater market value. Hall finds that, although research and development (R&D) accounts for a “reasonable fraction” of the variance of market value, the relation is not stable, and a great deal of the variation is unexplained. Patents matter, according to Hall, but less than research and development.

Empirical studies also suggest that failing to include intangibles is likely to considerably bias estimates of the effects of tangibles on both market value and output. Gu and Lev (2001) find that expenditures on capital, R&D, and technology acquisitions are all highly correlated with intangible capital. Similarly, BHY find evidence for a strong correlation between organizational structure and investment in information technology.

Although the connections are difficult to conceptualize and measure, organizational capital is closely linked to the way workers are organized and, in turn, to the apparently different human capital mixes across firms in the same industry. As emphasized in the preceding section, knowledge of the entire distribution of human capital within each firm allows us to quantify the relation between outcomes like productivity, market value, and the organization of human capital.

5.2.4 The Market Value of a Firm: Tangible Assets, Intangible Assets, and Human Capital

The general approach to describing the market value of a firm, V_{jt}, in terms of its tangible and intangible assets is well summarized, derived, and motivated in BHY and can be written as

(3)    V_{jt} = V(K^T_{jt}, K^I_{jt}, . . .)
The market value of a firm is assumed to be an increasing function of the assets.⁴ Defining and measuring all of the terms in this relation are difficult, however. If the market is characterized by strong efficiency, then, as Bond and Cummins (2000) point out, the market value of a company will equal the replacement cost of its assets (in the absence of adjustment costs and market power). From this perspective, one way of measuring intangibles is a residual approach (see, e.g., Hall 2001), because the residuals will reflect the difference between the market value and observed assets.

4. BHY specify a linear relation and emphasize the departure of coefficients from 1. Hall (1998) discusses an alternative log-linear relation that may be relevant. We use the log-linear specification in our analysis in part because our human capital measures are not on the same inherent scale and metric as the measures of assets and market value.
Alternatively, direct measures of intangibles (e.g., organizational capital, as in BHY) can be included in an econometric specification explaining market value. However, such a specification potentially permits the coefficients on the various assets to reflect direct and indirect effects. One interpretation of the BHY coefficients is that they are due to complementarities with unmeasured intangibles. Thus, the coefficient on any measured asset will reflect the correlation between the measured and unmeasured assets. As BHY have found, the coefficient on information-technology capital in a linear specification of equation (3) is larger than 1. BHY provide evidence that this reflects the complementarity between information-technology capital and organizational capital.

Should the preceding considerations lead one to include human capital in the set of variables in an econometric specification of the market value equation? In the absence of complementarities, a basic view of the role of human capital might suggest that it is not relevant for the market value of the firm. That is, if all human capital is general human capital, and if it is fully compensated by the market, and if no correlation exists between human capital and unmeasured tangible or intangible assets, then human capital will not be reflected in market value. Several factors may, however, require departures from these assumptions. First, human capital may not be fully compensated in the market. Second, the chosen mix of human capital may indeed be a key aspect of what is meant by “organizational capital.” As discussed above, many factors may yield a relation between productivity and higher moments of the within-firm distribution of human capital. Under this view, the average level of human capital at the firm may not matter for market value, but the way that human capital is organized (as measured by higher moments) may matter a great deal. Finally, human capital (its average and other measures of the distribution) may be complementary to unmeasured tangible and intangible assets in a sense analogous to the arguments and findings of BHY. As such, human capital may be positively related to market value because of measures of tangible or intangible assets that are omitted from the specification.

5.2.5 Econometric and Interpretation Issues

The previous subsections provided an overview of our approach. We explore newly created measures of human capital from longitudinal employer-employee data. These measures, in principle, encompass traditional measures and moderate some of the econometric difficulties. In what follows, we explore the relation of these measures to productivity and market value at various levels of aggregation. The discussion above suggests that a host of econometric issues could arise to complicate the estimation of any productivity or market value equation. At the heart of these issues is the
well-known problem that tangible assets and intangible assets, including those involving some measure of human capital, are endogenous. The observed measures for any given econometric implementation of equations (2) and (3) may also be proxies for other unobserved measures. Our specification assumes that firms have control over the level of human capital that they choose, and so an additional empirical concern is the degree to which firms are constrained in adjusting their workforce (as noted by Acemoglu 2001, among others). Work by Haltiwanger, Lane, and Spletzer (2001) notes the remarkable degree of persistence in firms’ choice of workforce. However, this overall picture of firm-level persistence suggests that these firm choices are quite deliberate. Firms have ample opportunity to change the workforce—a great deal of empirical evidence shows that firms churn workers through jobs at quite high rates and hence have abundant opportunities to change their worker mix (see, for example, Burgess, Lane, and Stevens 2000).

In this paper, we focus on identifying economically and statistically significant relations rather than attempting to establish causality or to pin down direct versus indirect effects.⁵ We include measures of tangible and intangible assets in relatively simple specifications of productivity and market value equations. We recognize that our coefficient estimates reflect both direct and indirect effects of the assets we measure. In particular, the effect of human capital on productivity and market value may include both direct and indirect components. However, by looking at the effect on both productivity and market value, we hope to make some progress in understanding the role of human capital on business outcomes.

If the measures of human capital are capturing mostly general human capital, for which the worker is fully compensated, and if such human capital measures are not correlated with unmeasured intangibles (or other unmeasured assets), then human capital is likely to have a positive effect on productivity (as AKM found in France) and very little effect on market value (because firms are fully paying for the human capital and thus generate no additional value from having higher human capital). However, if measures of human capital (or some components or indices of our human capital measures) are positively related to productivity and market value, then the measures are presumably capturing, either directly or indirectly, some form of intangible asset associated with human capital.

We recognize the importance in this context of distinguishing the direct from the indirect effects of human capital. However, as noted above, our objective is to explore the links between our new measures of human capital and productivity and market value in a largely descriptive manner. We anticipate that identifying and quantifying the respective direct and indirect roles of these new measures of human capital in this context will be the subject of much research in the years to come.⁶

5. In that sense, our approach is very much in the spirit of BHY.
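As an illustration of the kind of descriptive specification discussed here, the sketch below fits a log-linear market value equation on simulated data. The variable names and magnitudes are hypothetical and are not taken from the chapter.

```python
# Descriptive log-linear market value regression on simulated firm-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1_000
firms = pd.DataFrame({
    "log_tangible_assets": rng.normal(5.0, 1.0, n),
    "share_top_quartile_h": rng.uniform(0.0, 1.0, n),
    "share_bottom_quartile_h": rng.uniform(0.0, 1.0, n),
})
# Simulated outcome so the regression has something to find.
firms["log_market_value"] = (1.0 * firms["log_tangible_assets"]
                             + 0.5 * firms["share_top_quartile_h"]
                             - 0.3 * firms["share_bottom_quartile_h"]
                             + rng.normal(0.0, 0.5, n))

# Coefficients in such a specification mix direct effects with the indirect
# effects of any omitted intangibles correlated with the included regressors.
fit = smf.ols("log_market_value ~ log_tangible_assets + share_top_quartile_h"
              " + share_bottom_quartile_h", data=firms).fit()
print(fit.params)
```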
One other related issue of interpretation warrants mention. We are exploring the relation between productivity and human capital measures on the one hand, and market value and human capital measures on the other. These two relations capture very different aspects of firm performance. First, as already noted, any productive input that is fully compensated in the market may be related to productivity but unrelated to market value. Second, productivity captures current activity, while market value reflects future profits and associated anticipated value. Thus, the factors that affect current activity may be very different from those affecting future profit streams. One might argue that some factors inherently lead to a negative correlation between market value and current productivity. For example, a business with a “new idea” carrying a high market value may be actively expanding and investing in physical and human capital. Adjustment costs may imply that such a firm exhibits low current productivity.

5.3 Data

The key measures for this project are human capital, physical capital, productivity, and market value. The integrated employer-employee data allow us to construct firm-specific measures of human capital. The data from the Economic Censuses provide measures of output, employment, and other inputs to explore the relation between (labor) productivity and human capital measures. The Compustat data on publicly traded firms provide us with measures of output, employment, physical capital, and market value at the firm level. In terms of matching, we first match our employer-employee data to the Economic Census and other business-level data at census. We then match the Compustat data to the integrated employer-employee, Economic Census, and related data.

Figure 5.1 provides a brief summary of the data resources used for this project, but of necessity it vastly reduces the number and complexity of the links involved in constructing the matched employer-employee data. The details of these linkages can be found in the appendix.

6. Cummins (chap. 2 in this volume) takes one approach to separate out some of these effects (although not in the context of using measures of organizational or human capital). He uses instrumental variable techniques to isolate the contribution of measures of tangible assets by trying to find instruments that are correlated with the measured tangibles but uncorrelated with unmeasured intangibles. Thus, he attempts to identify the direct effect of the measured assets. Moreover, his approach in principle avoids another related problem of endogeneity arising from correlations of the asset variables with unmeasured productivity or market value shocks. The unmeasured, idiosyncratic productivity and market value shocks likely reflect, however, the idiosyncratic factors that we are seeking to understand. The pursuit of an estimation strategy for instruments that are supposedly orthogonal to these shocks may completely miss the role of intangibles.
Fig. 5.1   Summary of data sources
However, we provide summary information about the data and matching in the next two subsections.

5.3.1 The Integrated Employer-Employee Data

We exploit new Census Bureau data that are part of the LEHD program. These data integrate information from state unemployment insurance records and economic and demographic data from the Census Bureau in a manner that permits the construction of longitudinal information on workforce composition at the firm level. The LEHD program permits direct linking of the Census Bureau’s demographic surveys (household-based instruments) with its economic censuses and surveys (business-based and business-unit-based surveys).

The unemployment insurance (UI) wage records are discussed elsewhere (see Burgess, Lane, and Stevens 2000). The Employment Security Agency in each state collects quarterly employment and earnings information to manage its unemployment compensation program. These data enable us to construct quarterly longitudinal information on employees.
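The sketch below indicates, with hypothetical field names and toy records, how quarterly UI wage records can be arranged into a person-level longitudinal file of this kind; the actual LEHD processing is far more elaborate.

```python
# Toy example: assemble quarterly UI wage records into a person-quarter panel.
import pandas as pd

ui_records = pd.DataFrame({
    "person_id":   [1, 1, 1, 2, 2],
    "employer_id": ["A", "A", "B", "A", "C"],
    "year":        [1997, 1997, 1997, 1997, 1997],
    "quarter":     [1, 2, 3, 1, 2],
    "earnings":    [8000.0, 8200.0, 9100.0, 5000.0, 5300.0],
})

# Keep one record per person-quarter, retaining the highest-earning (dominant)
# employer; this is one simple way to track employer transitions over time.
panel = (ui_records
         .sort_values("earnings", ascending=False)
         .drop_duplicates(["person_id", "year", "quarter"])
         .sort_values(["person_id", "year", "quarter"])
         .reset_index(drop=True))
print(panel)
```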
The advantages of the UI wage records are numerous. The data are frequent, longitudinal, and potentially universal. The sample size is generous, and reporting for many items is more accurate than that in survey-based data. The advantage of having a universe as opposed to a sample is that the movements of individuals among employers and the consequences for earnings can be tracked. In addition, longitudinal data can be constructed with the employer as the unit of analysis.

The LEHD program houses data from nine states that currently comprise 45 percent of total U.S. employment: California, Florida, Illinois, Maryland, Minnesota, North Carolina, Oregon, Pennsylvania, and Texas. In this paper, we use data from seven of these states and provisional national weights to build our human capital measures. We currently have the crosswalk between the UI files and the establishment-level and firm-level data for six of the seven states. For this reason, we restrict our analysis of the relation between productivity, market value, and human capital to data from this six-state subset.⁷

Perhaps the main drawback of the UI wage data is the lack of even the most basic demographic information on workers (Burgess, Lane, and Stevens 2000). Links to Census Bureau data can overcome this problem in two ways: first, the individual can be integrated with administrative data at the Census Bureau containing information such as date of birth, place of birth, and gender for almost all the workers in the data. Second, as discussed in the previous section, LEHD program staff members have exploited the longitudinal and universal nature of the dataset to estimate jointly fixed worker and firm effects using the methodology described in detail in ALM and in Abowd, Creecy, and Kramarz (2002).

5.3.2 The Economic Censuses and Related Business-Level Data

The Economic Censuses (conducted every five years) provide comprehensive data on basic measures such as output, employment, and payroll for all of the establishments in the United States. In addition, in certain sectors (such as manufacturing) the census asks more-detailed questions on other inputs (such as capital). Our first goal is to create a matched data set linking the human capital measures to the Economic Census data. For the present work, we focus on the Economic Censuses in 1997.

One issue that immediately arises is the appropriate and feasible level of aggregation of business activity. Although the Economic Censuses are conducted at the establishment level, the business-level identifiers on our human capital measures are at the federal Employer Identification Number (EIN), two-digit Standard Industrial Classification (SIC) code, and state level.⁸

7. In order to meet the requirements of the data use agreement between census and the individual states, the identity of states used in a particular analysis is normally not released.
166
John M. Abowd et al.
Therefore, we aggregate the Economic Census data to the level of the business-level identifiers so as to match them to the human capital files.⁹ Our unit of observation is somewhere between an establishment and a firm, but most of the observations in our analysis data are at the establishment level. For firms with multiple establishments reporting under a single EIN in a single state, we aggregate the establishment data to the two-digit SIC (SIC2), state level. In what follows, we begin our analysis of the relation between human capital and productivity using this “quasi-establishment-level” data. For this analysis, we have roughly 340,000 business units that we can match to the Economic Censuses (in a universe of roughly 430,000 business units at this level of aggregation from the UI files for the six states used in this analysis). Most of the UI businesses that we cannot match to the Economic Censuses are beyond the scope of the Economic Censuses (for example, agricultural businesses).

We are also able to accomplish essentially the same thing in non-Census years using the Census Business Register, previously known as the Standard Statistical Establishment List (SSEL). Although the information in the Business Register is limited, it does identify the ownership structure of firms so that we can further aggregate to the enterprise or firm level. The latter aggregation permits us to match enterprise-level data on human capital to Compustat. Because we are working with only six states, we are limited in our ability to examine evidence for large companies that operate in multiple states. We use a threshold rule (for example, 50 percent of employment in the company must be in these six states) to restrict attention to companies for which we can measure human capital and firm outcomes such as market value and productivity in a comparable fashion.

In what follows, we first aggregate our human capital estimates up to the firm level for all firms in our six states (using the 50 percent rule as noted). The resulting sample contains roughly 300,000 firms. We use this sample to investigate the relation between human capital and productivity at the enterprise or firm level. We then restrict our attention to Compustat firms, which reduces the sample substantially because only about 13,000 Compustat firms exist nationwide. For this restricted sample, we again investigate the relations between human capital and productivity and also investigate the relation between human capital and market value.
8. The identifiers in the LEHD program’s human capital data provide additional geographic and industry information, but they are not coded down to the workplace (establishment) level. Ongoing research is attempting to refine the most disaggregated economic entity available in these data.
9. We use the two-digit SIC number as the industry measure in this work because, although industries are coded at the four-digit level in the Economic Census and in the underlying establishment data on the UI side, we have not yet implemented an algorithm to use additional industrial or geographic detail in the definition of the establishment.
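The following sketch illustrates the two aggregation steps just described, using hypothetical column names and toy records: collapsing establishment records to EIN/two-digit-SIC/state cells and applying the 50 percent in-state employment screen.

```python
# Toy example of the "quasi-establishment" aggregation and the coverage rule.
import pandas as pd

estabs = pd.DataFrame({
    "ein":          ["111", "111", "111", "222"],
    "sic2":         ["35", "35", "20", "35"],
    "state":        ["MD", "MD", "IL", "TX"],
    "in_ui_states": [True, True, True, False],
    "employment":   [120, 80, 50, 400],
    "revenue":      [30.0, 20.0, 10.0, 90.0],
})

# (i) Collapse establishments to EIN x SIC2 x state cells.
cells = (estabs.groupby(["ein", "sic2", "state"], as_index=False)
               [["employment", "revenue"]].sum())

# (ii) Firm-level screen: share of each firm's employment in the covered states.
ui_emp = estabs.loc[estabs["in_ui_states"]].groupby("ein")["employment"].sum()
tot_emp = estabs.groupby("ein")["employment"].sum()
coverage = (ui_emp / tot_emp).fillna(0.0)
kept_firms = coverage[coverage >= 0.5].index

print(cells[cells["ein"].isin(kept_firms)])
```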
5.4 Human Capital Estimates

The results of the human capital estimation are based on data for seven states for the years 1986–2000 and use the specification in equation (1). Although the methodology and estimates we use are discussed in detail in ALM, we provide here a brief summary of some of the features of the human capital estimates before relating these new estimates to productivity and market value.

Some basic features of the estimates for the seven-state data set are shown in table 5.1. First, the contributions of worker and firm effects to worker earnings are roughly equal. Second, the R² of this earnings regression (1 – .402²) is approximately .84, a great deal higher than that for regressions based simply on worker characteristics. Third, ALM augment the analysis by decomposing the person effect into the part attributable to time-constant observable characteristics such as gender and education and the part attributable to unobservable characteristics. The fourth and fifth rows of the table illustrate the results of this decomposition (using the notation from AKM). The unobserved component of the person effect is much more important and more highly correlated with wages than the observed component.

Fourth, the different components of human capital (that is, the person effect and the experience component) exhibit different variation and covariation. Indeed, an interesting feature of the person effect and experience component is that they are negatively correlated. This result is not surprising (because, for example, younger generations of workers are more highly educated), but it reminds us that skill consists of several dimensions and that they need to be taken into account.
Table 5.1   Summary of pooled human capital wage components

                                                                       Correlation with:
Component                                        Standard deviation   ln w      xβ        θ        α        uη       ψ        ε
Log real annualized wage rate (ln w)                  0.881           1.000    0.224    0.468    0.451    0.212    0.484    0.402
Time-varying personal characteristics (xβ)            0.691           0.224    1.000   –0.553   –0.575   –0.099    0.095    0.000
Person effect (θ)                                     0.835           0.468   –0.553    1.000    0.961    0.275    0.080    0.000
Unobserved part of person effect (α)                  0.802           0.451   –0.575    0.961    1.000    0.000    0.045    0.000
Non-time-varying personal characteristics (uη)        0.229           0.212   –0.099    0.275    0.000    1.000    0.101    0.000
Firm effect (ψ)                                       0.362           0.484    0.095    0.080    0.045    0.101    1.000    0.000
Residual (ε)                                          0.354           0.402    0.000    0.000    0.000    0.000    0.000    1.000

Sources: Authors’ calculations using the Longitudinal Employer-Household Dynamics (LEHD) Program databases.
Notes: Based on 287,241,891 annual observations from 1986–2000 for 68,329,212 persons and 3,662,974 firms in seven states as described in the text. No single state contributed observations for all years.
Finally, one surprising aspect of this comprehensive decomposition of the wages is that the correlation between the person effect and the firm effect is virtually zero at the observation (person-year) level. We do not seek to explain this somewhat surprising finding, but we note that various aggregations of the person and firm effects yield a strong and positive relation. For example, ALM show that at the industry level, person and firm effects are positively related. Interestingly, Abowd and others (2001) show that at the firm level, person and firm effects are positively related after controlling for output, local wage effects, and broad industry. These results by industry and at the firm level are quite relevant here since they suggest systematic sorting of workers across different firms and industries.

In the next subsections, we first provide some summary information about how these new measures compare with the JGF-like measures of human capital. We also describe the differences in the two components of the ALM measure of human capital—experience and person effects—and how they vary across workers. Finally, we examine the degree to which the human capital measures vary across firms and industries.

5.4.1 A Comparison of New and Traditional Measures of Human Capital

In principle, the JGF methodology can be applied equally well to the measurement of both sectoral and aggregate labor quality, but in practice, the LEHD approach permits more heterogeneity within and across industries. Lengermann (2002a) has developed sectoral aggregates of human capital following the JGF approach and compared them to LEHD estimates. Briefly, the JGF approach incorporates data from the Censuses of Population, the Current Population Survey (CPS), and the national income and product accounts (NIPA). JGF base their labor quality indexes on totals of labor inputs cross-classified by sex, age, educational attainment, employment class, and industry.

We summarize the results of two different types of comparison here. The first “direct” approach compares the JGF indexes to sectoral labor quality derived from industry averages of our human capital measure for the period 1995–98. JGF formally define labor quality as the ratio of the total volume of labor to hours worked, where volume is measured by a constant-quality index of labor quantity. The LEHD measure of industry-average human capital follows essentially the same logic, where the measure of labor volume is also based on a constant-quality human capital measure and where total employment substitutes for total hours worked. Neither approach is completely satisfactory. The LEHD data cannot measure hours worked. The JGF constant-quality index of labor quality confounds firm heterogeneity with person heterogeneity.
We compare the growth rates in the human capital indexes over the period 1995–98 using the LEHD-based and JGF approaches. The within-industry growth rates are highly correlated—the employment-weighted average of the sectoral correlations is 0.79. However, average growth for any given industry is much higher, and cross-industry variation in those growth rates is much greater, in the LEHD measures than in the JGF measures: the average growth rate for the LEHD measure over the four years is 0.04, with a cross-industry standard deviation of 0.067, while the corresponding values for the JGF measures are 0.014 and 0.001.

In what follows, we exploit cross-sectional variation (across firms) in their human capital, whereas the JGF procedure focuses on generating growth rates of human capital by industry. As such, the JGF measures are not well suited to examining within-year, cross-industry variation. Thus, as a second “indirect” approach we approximate the JGF labor quality indexes with indexes derived from predicted industry average wages obtained by regressing wages on age, education, and sex using the CPS. For this purpose, we use the same cells used by JGF. We show that the time series growth rates of these indirect measures are highly correlated with the actual JGF measures (the employment-weighted average correlation is 0.73). Thus, the CPS-based approach does a reasonable job of approximating the more sophisticated JGF measures.

We compare the cross-industry variation in the CPS-based measures with the same variation using the LEHD measures for the year 1998. The two measures are, in principle, comparable because both rely on regression approaches that attempt to isolate the component of wages due to individual characteristics. However, because LEHD data permit individual contributions to wages to be distinguished from firm contributions, one might not expect them to yield identical results. Workers sort nonrandomly into firms based on their own characteristics—both observable and unobservable—and the characteristics of firms. Furthermore, firm wage premiums—the firm effects in the wage regression (1)—are not distributed uniformly across industries. These two facts imply that a strong, positive correlation exists between person and firm heterogeneity at the industry level (ALM)—a correlation that the JGF cell-based analysis cannot disentangle.

We plot the industry-level aggregates for the CPS-based approach against the industry-level aggregates for the most inclusive measure of skill from the LEHD approach (figure 5.2). Although the levels are normalized differently, a great deal of correlation clearly exists between the two measures—indeed, the correlation is 0.76. However, somewhat more cross-industry variation exists in the LEHD-based measure than in the CPS-based measure (the standard deviation of the former is 0.15, and that of the latter is 0.13).

In summary, the LEHD-based measures by industry are closely related to those derived by JGF or by a simpler but closely related CPS-based procedure. However, LEHD-based measures generate greater average growth and more cross-sectional variation both in growth rates across industries and in levels of human capital across industries within a year.
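A minimal sketch of the “direct” comparison described above, on simulated data: an industry-level labor quality index formed as the average of worker-level h, and its growth over 1995–98. Names and values are hypothetical.

```python
# Toy industry-level labor quality index built from worker-level human capital h.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
workers = pd.DataFrame({
    "industry": rng.choice(["manufacturing", "retail", "services"], 30_000),
    "year": rng.choice([1995, 1996, 1997, 1998], 30_000),
    "h": rng.normal(0, 0.8, 30_000),       # estimated worker human capital
})

# Industry-year index: mean h across jobs (each record is one job, so the
# simple mean is already employment weighted in this toy setup).
index = workers.groupby(["industry", "year"])["h"].mean().unstack("year")

# Change over 1995-98; since h is in log-wage units, differences approximate
# the growth rates compared with the JGF indexes in the text.
growth = index[1998] - index[1995]
print(growth)
```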
Fig. 5.2 Comparison of CPS and LEHD measures of human capital at the industry level
5.4.2 The Construction of New Human Capital Measures

A major contribution of the LEHD approach is the richness of the new measures of human capital, and these are fully discussed in ALM. Here, we explore some of the key features of the new measures, particularly aggregated to the firm level. For this purpose, we use three worker and firm traits to build measures of the human capital resources available to firms: the person effect (θ), the overall labor market experience of each worker captured by the experience component of xβ (denoted xβ in this section, though it excludes the additional controls described in note 1), and the sum of these two components (overall human capital, or h).

We describe the distribution of these measures in figure 5.3. All three components of the distribution obviously exhibit enormous variation across workers. The shapes of the distributions of the alternative measures differ: The distribution of the person effect is bell shaped and has thick tails and high variance; the distribution of experience is less smooth; and the distribution of human capital (the sum of θ and xβ) is roughly bell shaped and centered about zero and has much less mass at the tails than either experience or θ. Underlying these relations is the negative correlation between experience and person effects reported in table 5.1.
Fig. 5.3   Distribution of human capital across workers
5.4.3 The Construction of Firm-Level Measures

Although the different worker-level measures of human capital provide a useful context, the focus of this paper is on developing firm-level measures of human capital and relating them to firm outcomes. The firm-level measures that we use are those developed in ALM. They are based on kernel-density estimates of the within-firm distribution of human capital. The details of the estimation of the kernel densities are provided in ALM. Some restrictions on the sample that are necessitated by this approach are discussed in detail in the appendix.

One key aspect of the variation across firms is driven by large variation in the distribution of human capital across industries (table 5.2). Finance, insurance, and real estate (FIRE) and manufacturing are both high-human-capital industries. However, the components of human capital vary across these industries: the FIRE industries have high human capital especially because of having workers with high person effects, while the high human capital of the manufacturing industries arises more through their having workers with high experience effects.

Human capital varies substantially across industries, but the variation across firms within a given industry is enormous. We computed for each firm the share of workers that are in the lowest quartile of the economywide distribution of human capital to obtain the distribution of those shares across firms (figure 5.4, panel A). We conducted the same exercise for the share of workers at each firm that are in the upper quartile of the distribution (figure 5.4, panel B).
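The sketch below illustrates, on simulated data, the kind of firm-level measures described here: economywide quartile cutoffs of h, each firm's top- and bottom-quartile shares, and a within-firm kernel density. It is illustrative only and does not reproduce the ALM estimator.

```python
# Toy firm-level summaries of the within-firm distribution of human capital h.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
workers = pd.DataFrame({
    "firm": rng.integers(0, 50, 5_000),
    "h": rng.normal(0, 0.8, 5_000),
})

# Economywide quartile cutoffs of h (computed over all workers).
q25, q75 = workers["h"].quantile([0.25, 0.75])

# Share of each firm's workers in the top and bottom quartiles of the
# economywide distribution.
firm_measures = workers.groupby("firm")["h"].agg(
    share_top_quartile=lambda h: (h > q75).mean(),
    share_bottom_quartile=lambda h: (h < q25).mean(),
)

# Within-firm kernel density for one example firm.
h_firm0 = workers.loc[workers["firm"] == 0, "h"].to_numpy()
density = gaussian_kde(h_firm0)
grid = np.linspace(-3, 3, 7)

print(firm_measures.head())
print(np.round(density(grid), 3))
```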
Table 5.2   Summary statistics on human capital, by industry (1997)

Industry          Total          Proportion of workers    Proportion of workers    Proportion of workers
(SIC division)    employment     above the overall        above the overall        above the overall
                                 median of h              median of θ              median of xβ
Agriculture          304,134          0.338                    0.407                    0.502
Construction       1,366,022          0.510                    0.465                    0.556
FIRE               1,382,730          0.531                    0.591                    0.439
Manufacturing      3,365,954          0.539                    0.473                    0.560
Mining               194,678          0.511                    0.387                    0.646
PubAdmin             811,215          0.558                    0.451                    0.584
Retail             3,537,787          0.383                    0.542                    0.388
Services           7,856,442          0.493                    0.520                    0.468
TCE                1,374,002          0.562                    0.495                    0.558
Wholesale          1,626,221          0.567                    0.529                    0.540

Notes: The sample is 1997 job-level unemployment insurance (UI) data from 6 states. Includes all jobs held by workers imputed to be full time at the end of the first quarter 1997.
Figure 5.4 shows substantial differences both within and across industries. As is consistent with the data in table 5.2, many manufacturing and FIRE firms have low shares of low-skill workers (lowest quartile), while retail trade has many firms with high shares of low-skill workers. However, firms within an industry obviously vary enormously in their shares of high- and low-skill workers. Apparently, different firms in the same industry choose very different mixes of human capital; in the analysis that follows, we will investigate whether this heterogeneity in human capital is related to heterogeneity in productivity and market value.

5.5 The Relation between Productivity and Human Capital at the Micro Level

In this section, we explore the relation between our rich measures of establishment-level human capital and establishment- and firm-level productivity, controlling, as much as possible, for other relevant factors (capital intensity, for example).¹⁰ For this purpose, we focus on the 1997 Economic Census. Our measure of labor productivity is revenue per worker, the standard measure used in official BLS productivity statistics for gross output per worker.¹¹

10. Recall that the level of aggregation that we use to approximate the establishment is that of an EIN/SIC2/state cell. This unit is somewhere between the establishment and the firm, but most firms operate only a single establishment.
11. The official estimates include adjustments for changes in inventories in inventory-holding sectors. However, studies by Foster, Haltiwanger, and Krizan (2001) show that, in manufacturing, the correlation between labor productivity measured as shipments per worker and labor productivity measured as shipments adjusted for inventory changes is extremely high (almost 1).
Fig. 5.4   Between-firm distributions of human capital, by industry: A, proportion of workers at firm in bottom quartile of h; B, proportion of workers at firm in top quartile of h
An important goal is to determine which measures of the within-firm distribution of human capital are relevant for understanding outcomes such as productivity and market value. From a traditional viewpoint, we want to control for a measure of the central tendency of the within-firm distribution of human capital. However, from the perspective of considering nonlinearities and other factors related to the organization of human capital at a business, we also want to explore additional measures of the within-firm distribution of human capital.

Our approach is necessarily exploratory since neither theory nor earlier empirical research provides much practical guidance. Accordingly, we explore the role of the following measures: (a) the fraction of workers at the business whose human capital is above the economywide median, (b) the fraction above the 75th percentile, (c) the fraction below the 25th percentile, and (d) the interaction between—more specifically, the product of—measures b and c. We consider these four measures using the overall human capital measure h and also the separate components of human capital (the person effect, θ, and the experience component, xβ).¹² Moreover, we consider a range of specifications, some parsimonious (with only a small number of summary human capital measures) as well as richer specifications with a number of measures of the distribution included.

Table 5.3 presents the means and standard deviations of our human capital and labor productivity measures for our overall sample and for the manufacturing businesses. For the latter we can also measure capital intensity. The statistics reported in the table are based on the employment-weighted distribution. In section 5.4, we discussed many of the features of the human capital distribution across businesses. However, a few additional points are worth making here (table 5.3). First, businesses exhibit tremendous heterogeneity in their mix of human capital, as evidenced by the very large standard deviations in the human capital measures. Second, manufacturing apparently has higher labor productivity and workers with higher human capital (on both the person effect and experience dimensions) than other sectors.

We made an exploratory analysis of the relation between log labor productivity and our h measure of the distribution of human capital (table 5.4) and an analysis using the components of h separately (table 5.5). In all cases, the results are based upon employment-weighted regressions. All analyses included two-digit fixed industry effects, which are highly significant. Moreover, the explanatory power of each set of regressions is uniformly high; this result suggests that measures of human capital are, either directly or indirectly, important sources of cross-sectional differences in productivity.

12. Note that the economywide thresholds are based on the universe of all workers in the six states (not just workers employed at the businesses we match to the Economic Censuses) for which we have developed human capital estimates. We also generated versions of these measures on the basis of thresholds relative to a specific industry rather than the wider economy. The results were not sensitive to this distinction.
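The following sketch shows the form of the exploratory regressions described above, using simulated data: log labor productivity on measures (b) through (d), with two-digit industry effects and employment weights. Variable names are hypothetical stand-ins, not the chapter's code.

```python
# Toy employment-weighted productivity regression with industry fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2_000
biz = pd.DataFrame({
    "log_productivity": rng.normal(4.7, 1.1, n),
    "frac_above_p75_h": rng.uniform(0, 0.5, n),      # measure (b)
    "frac_below_p25_h": rng.uniform(0, 0.5, n),      # measure (c)
    "sic2": rng.choice(["20", "35", "58", "73"], n),
    "employment": rng.integers(5, 500, n),
})
biz["interaction"] = biz["frac_above_p75_h"] * biz["frac_below_p25_h"]  # (d)

# Weighted least squares with two-digit industry dummies and employment weights.
fit = smf.wls(
    "log_productivity ~ frac_above_p75_h + frac_below_p25_h + interaction"
    " + C(sic2)",
    data=biz,
    weights=biz["employment"],
).fit()
print(fit.params[~fit.params.index.str.startswith("C(")])
```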
Table 5.3   Mean values of variables in log productivity regressions

                                                               All sectors            Manufacturing
Variable                                                      Mean       SD          Mean       SD
Log labor productivity                                        4.731     1.140        5.130     0.888
Log capital intensity                                                                 4.230     1.201
Overall h (θ + xβ)
  Fraction of employment above 50th percentile                0.480     0.215        0.538     0.225
  Fraction of employment above 75th percentile                0.237     0.176        0.267     0.185
  Fraction of employment below 25th percentile                0.266     0.172        0.208     0.169
  Interaction: fraction above 75th percentile
    with fraction below 25th percentile                       0.042     0.025        0.035     0.022
Person effect (θ)
  Fraction of employment above 50th percentile                0.519     0.180        0.473     0.181
  Fraction of employment above 75th percentile                0.264     0.130        0.203     0.110
  Fraction of employment below 25th percentile                0.230     0.156        0.239     0.173
  Interaction: fraction above 75th percentile
    with fraction below 25th percentile                       0.048     0.027        0.039     0.026
Experience component (xβ)
  Fraction of employment above 50th percentile                0.455     0.157        0.545     0.133
  Fraction of employment above 75th percentile                0.220     0.116        0.285     0.115
  Fraction of employment below 25th percentile                0.282     0.144        0.205     0.096
  Interaction: fraction above 75th percentile
    with fraction below 25th percentile                       0.049     0.018        0.050     0.016
No. of observations                                         337,495                39,638

Notes: The sample is 1997 data from 6 state UI-based firms (defined at the EIN/2-digit SIC level) matched to Economic Census and Annual Survey of Manufactures data. SD = standard deviation.
The fact that the explanatory power for the manufacturing sector regressions is substantially less than that for all sectors is consistent with the notion that human capital is more important for the service sector than manufacturing—and more important for the “new” economy than the “old” economy.

Businesses with a greater fraction of workers above the economywide median human capital level are much more productive (tables 5.4 and 5.5, column [A]). For the overall human capital measure, a change in this fraction of 1 standard deviation is associated with a change of 27 log points in labor productivity (table 5.4). For the person-effect measure (table 5.5), a change of 1 standard deviation in the fraction of high human capital workers is associated with a change of 25 log points in labor productivity. For the experience component, a change of 1 standard deviation in the fraction of high human capital workers is associated with a change of 23 log points
Table 5.4   The relation between labor productivity and the complete human capital measure (analysis level: establishments; dependent variable: log labor productivity; standard errors in parentheses)

All sectors
  (A)  Fraction of workers above 50th percentile of human capital: 1.264 (0.007).  No. of observations = 337,495; R² = 0.555
  (B)  Fraction above 75th percentile: –0.017 (0.010); fraction below 25th percentile: –1.875 (0.012).  No. of observations = 337,495; R² = 0.569
  (C)  Fraction above 75th percentile: 0.268 (0.012); fraction below 25th percentile: –1.673 (0.012); interaction of above 75th and below 25th percentiles: –3.032 (0.063).  No. of observations = 337,495; R² = 0.572

Manufacturing only
  (D)  Fraction of workers above 50th percentile of human capital: 1.064 (0.017).  No. of observations = 39,638; R² = 0.325
  (E)  Fraction above 50th percentile: 0.512 (0.017); log capital intensity: 0.302 (0.004).  No. of observations = 33,926; R² = 0.471
  (F)  Fraction above 75th percentile: 0.430 (0.030); fraction below 25th percentile: –1.143 (0.033); interaction: –6.563 (0.197).  No. of observations = 39,638; R² = 0.353
  (G)  Fraction above 75th percentile: 0.173 (0.027); fraction below 25th percentile: –0.658 (0.031); interaction: –3.870 (0.187); log capital intensity: 0.285 (0.003).  No. of observations = 33,926; R² = 0.483

Notes: The human capital measure is h = θ + xβ. The estimation sample is UI-based establishments (defined at the EIN/2-digit SIC level) for six states matched to the 1997 Economic Census and Annual Survey of Manufactures data. Standard errors in parentheses. Other controls include 2-digit industry effects. Results are based on employment-weighted regressions.
Table 5.5   The relation between labor productivity and human capital, decomposed (analysis level: establishments; dependent variable: log labor productivity; standard errors in parentheses)

All sectors
  (A)  Fraction of workers above 50th percentile for θ: 1.400 (0.009); fraction above 50th percentile for xβ: 1.490 (0.120).  No. of observations = 337,495; R² = 0.547
  (B)  For θ—above 75th percentile: 1.990 (0.015); below 25th percentile: –0.450 (0.010). For xβ—above 75th percentile: 0.200 (0.020); below 25th percentile: –1.900 (0.017).  No. of observations = 337,495; R² = 0.564
  (C)  For θ—above 75th percentile: 1.700 (0.017); below 25th percentile: –0.920 (0.017); interaction: 2.830 (0.078). For xβ—above 75th percentile: 0.760 (0.022); below 25th percentile: –1.560 (0.018); interaction: –4.800 (0.090).  No. of observations = 337,495; R² = 0.568

Manufacturing only
  (D)  Above 50th percentile for θ: 1.240 (0.024); above 50th percentile for xβ: 1.410 (0.032).  No. of observations = 39,638; R² = 0.315
  (E)  Above 50th percentile for θ: 0.670 (0.022); above 50th percentile for xβ: 0.450 (0.031); log capital intensity: 0.310 (0.003).  No. of observations = 33,926; R² = 0.471
  (F)  For θ—above 75th percentile: 2.060 (0.050); below 25th percentile: –0.710 (0.043); interaction: 3.110 (0.246). For xβ—above 75th percentile: 0.540 (0.062); below 25th percentile: –1.610 (0.081); interaction: –7.020 (0.328).  No. of observations = 39,638; R² = 0.360
  (G)  For θ—above 75th percentile: 1.480 (0.048); below 25th percentile: –0.299 (0.040); interaction: 1.941 (0.230). For xβ—above 75th percentile: 0.135 (0.058); below 25th percentile: –0.713 (0.077); interaction: –5.175 (0.308); log capital intensity: 0.298 (0.003).  No. of observations = 33,926; R² = 0.496

Note: See table 5.4 notes.
in labor productivity (table 5.5). These effects are large, yet they reflect only a fraction of the standard deviation in measured labor productivity across businesses (114 log points). We also consider alternative measures of the distribution of human capital—focusing on the fraction of workers with high human capital and those with low human capital (tables 5.4 and 5.5, columns [B] and [C]). Here the results are somewhat more complicated to interpret, but in all of the results a rightward shift in the distribution is still associated with an increase in productivity. That is, if the share of workers in the lower quartile is decreased a certain amount and the share of workers in the upper quartile is increased the same amount, then productivity increases; this result holds for the overall h measure and for each of the components of the human capital measures. However, asymmetric effects arise from changes in the upper tail and lower tail, and the results are also sensitive to inclusion of an interaction effect. Moreover, the nature of the asymmetries differs across components of human capital. The results for the overall h measure show that changing the share of workers in the firm that are in the lower tail of the human capital distribution has a disproportionate effect (table 5.4, column [B]). Somewhat surprisingly, the coefficient on the upper tail is negative but small in absolute terms and relative to the coefficient on the lower quartile, and it is not significant. The analogous column in table 5.5 (column [B]) sheds further light on these results and shows that different components of human capital act in different ways. In particular, a disproportionate change arises from the upper tail of the person effect and from the lower tail of the experience effect. We have also devised an even richer specification in which we attempt to capture the interaction between high-skill and low-skill workers (tables 5.4 and 5.5, column [C]). In this specification, we find that the linear terms have the expected signs: holding other things constant, including the interaction effect, businesses with more workers in the top quartile of the human capital distribution and fewer workers in the lowest such quartile are more productive. However, the interaction effects are an important part of the effects of interest. For overall h and the experience effects, we find that the interaction effect is negative, whereas for the person effects we find that the interaction effect is positive. Putting the linear and interaction effects together reinforces the asymmetries we have already noted. That is, for the person effects, we obtain a disproportionately large change from an increase in the upper tail of the distribution, and the positive interaction effect reinforces this asymmetry. This result can be seen by noting that the combined linear and interaction effect for the person effect evaluated at the mean for the upper quartile is 2.45, and the combined linear and interaction effect for the lower quartile is –0.167. The magnitude of the implied variation in productivity is very
asymmetric as well. The combined effect implies that an increase of 1 standard deviation in the share of workers in the highest quartile yields a change of 32 log points in productivity while an increase of 1 standard deviation in the share of workers in the lowest quartile yields a loss of 3 log points in productivity. The opposite pattern holds for the experience effects. That is, the interaction effects reinforce the disproportionate change produced by the lower tail of the distribution of the experience effect. As discussed in section 5.2, the asymmetries in the effects of human capital on productivity can be explained in a variety of ways. We cannot distinguish between competing explanations, but our findings are consistent with the view that, at the worker level, the relation between productivity and experience is concave, and the relation between productivity and the person effect is convex. However, the results may also reflect complementarities across coworkers that differ on different dimensions of skill. Columns [D] through [G] of tables 5.4 and 5.5 show results for the manufacturing sector. Column [D] replicates column [A] but only for manufacturing. In column [E], capital intensity is an additional measure. For the most parsimonious specification, the results for manufacturing are quite similar to those for the overall economy when we do not control for capital intensity. Controlling for capital intensity does not change the qualitative nature of the results, and although it reduces the magnitudes of the effects substantially, they nonetheless remain very large. This aspect of the findings suggests that human capital is complementary to physical capital. Thus, as we discussed in section 5.2, we need to recognize that our measures of human capital are capturing both direct and indirect effects (where the latter stem in part from unobserved factors such as tangible and intangible assets). Columns F and G of tables 5.4 and 5.5 present results for manufacturing using the richer specification used in column [C] for all sectors—with and without a control for capital intensity. Again, the results are quite similar to those for all sectors without capital intensity. Once again, adding capital intensity reduces the magnitudes of most of the effects from the human capital measures. For manufacturing as a whole, capital-skill complementarity is clearly present. Indeed, capital-skill complementarity seems to exist for all of the dimensions of skill we are investigating. That is, the inclusion of capital intensity reduces the person effect, the interaction effects, and the effect of the experience component. How sensitive are these results to the level of aggregation? We address this issue by aggregating establishment-level data from the 1997 Economic Censuses to the firm level and estimating a set of similar regressions (tables 5.6 and 5.7). The qualitative results are very similar, and the most parsimonious specifications yield magnitudes that are quite similar to the “establishment-level” results in tables 5.4 and 5.5. However, for the more
Table 5.6
The relation between labor productivity and the complete human capital measure (analysis level: firms; dependent variable: log labor productivity) All sectors
Explanatory variable
(A)
Fraction of workers above 50th percentile of human capital Fraction of workers above 75th percentile of human capital Fraction of workers below 25th percentile of human capital Interaction of above 75th and below 25th percentiles Log capital intensity
1.309 (0.007)
No. of observations R2
303,219 0.537
(B)
0.11 (0.011) –1.664 (0.011)
Manufacturing only (C)
(D)
(E)
0.690 (0.016)
0.433 (0.015)
0.184 (0.013) –1.621 (0.011) –0.648 (0.058)
(F)
(G)
–0.194 (0.031) –1.133 (0.028) –0.889 (0.166)
–0.189 (0.029) –0.786 (0.027) –0.274 (0.154) 0.248 (0.003)
34,900 0.310
34,294 0.416
0.256 –(0.003) 303,219 0.551
303,219 0.551
34,900 0.292
34,294 0.408
Notes: The human capital measure is h = θ + xβ. The estimation sample is Business Register-based firms (defined as those with at least 50% of U.S. employment in the analysis states) for six states matched to the 1997 Economic Census and Annual Survey of Manufactures data. Standard errors in parentheses. Other controls include 2-digit SIC industry effects for the firm's primary industry and indicators for multi-location status and whether the firm had establishments in 1, 2, or 3 2-digit SIC categories. Results are based on employment-weighted regressions.
Table 5.7
The relation between labor productivity and human capital, decomposed (analysis level: firms; dependent variable: log labor productivity) All sectors
Explanatory variable
Fraction of workers above 50th percentile for θ
Fraction of workers above 75th percentile for θ
Fraction of workers below 25th percentile for θ
Interaction of above 75th and below 25th percentiles for θ
Fraction of workers above 50th percentile for xβ
Fraction of workers above 75th percentile for xβ
Fraction of workers below 25th percentile for xβ
Interaction of above 75th and below 25th percentiles for xβ
Log capital intensity
No. of observations
R²
Note: See table 5.6 notes.
(A)
(B)
Manufacturing only (C)
1.562 (0.009) 1.783 (0.015) –0.729 (0.011)
(D)
(E)
0.923 (0.023)
0.614 (0.021)
1.557 (0.017) –1.070 (0.016) 2.173 (0.073)
1.483 (0.011)
0.604 (0.030) 0.307 (0.019) –1.718 (0.015)
(F)
(G)
1.190 (0.050) –0.625 (0.038) 2.522 (0.209)
1.000 (0.046) –0.385 (0.036) 2.199 (0.193)
–0.113 (0.058) –1.207 (0.067) –2.348 (0.268)
–0.262 (0.053) –0.871 (0.062) –1.942 (0.247) 0.257 (0.003)
34,900 0.310
34,294 0.427
0.224 (0.028)
0.430 (0.022) –1.711 (0.017) –0.569 (0.084) 0.262 (0.003)
303,219 0.530
303,219 0.547
303,219 0.549
34,900 0.288
34,294 0.410
complex specifications, the magnitudes vary somewhat from the establishment-level results, especially when interaction effects are included. The differences in results are most apparent for the manufacturing sector when we include interaction effects for the overall h case and for the results on the experience component. Even in these cases, the overall patterns are quite similar: the lower tail of the human capital distribution for both h and experience at both the establishment level and the firm level produces disproportionate effects. The differences that arise between the establishment-level and firm-level results may be due to increased measurement error of both the productivity and the human capital measures at the firm level. Measuring productivity is more difficult at the firm level than at the establishment level, especially for large, complex firms with many establishments that cross industry boundaries. In a like manner, the human capital measures are more complex at the firm level than at the establishment level because firms with many establishments may differ in their distributions of human capital across establishments. The latter situation is interesting in its own right, and we plan to explore it in future work.

The results demonstrate powerfully that understanding differences in labor productivity across businesses—particularly outside of manufacturing—involves understanding differences in human capital across businesses. The close relationship between labor productivity and human capital is clearly evidenced by the very large R² in the regressions, and the result obtains regardless of whether these are direct or indirect effects and regardless of endogeneity issues. The results also clearly suggest that what matters is not simply a measure of the central tendency of the human capital distribution. The fraction of workers at the tails of the distribution matters, suggesting that the dispersion of human capital matters. Perhaps the most intriguing aspect of our results is the finding that the different components of human capital matter in different ways: the most productive firms are those that have a high fraction of workers in the top quartile of the person-effect distribution and a low fraction of workers in the lowest quartile of the experience-effect distribution. These findings strongly suggest that the organization and mix of the workforce matter substantially.

5.6 Investigating the Relation between Market Value and Human Capital

While we have several alternative samples and levels of aggregation at which to investigate the relation between productivity and human capital, market value is measured only at the firm level and only for publicly traded firms. Therefore, we are constrained to using the relatively small matched Compustat sample (a detailed discussion of the matched Compustat sample and variable definitions are in the appendix). We report the means and standard deviations of this subset of observations for 1996–98 (table
Table 5.8    Mean values of variables in market value regressions (all sectors)

Variable                                                     Mean     SD
Employment                                                   2,540    9,106
Log market value                                             4.844    2.008
Log capital                                                  3.167    2.175
Log other assets                                             3.814    2.086
Multi-location indicator                                     0.780

Overall h = θ + xβ
  Fraction of employment above 50th percentile               0.545    0.189
  Fraction of employment above 75th percentile               0.291    0.163
  Fraction of employment below 25th percentile               0.212    0.140
  Interaction: fraction above 75th percentile with
    fraction below 25th percentile                           0.045    0.210
Person effect (θ)
  Fraction of employment above 50th percentile               0.560    0.179
  Fraction of employment above 75th percentile               0.312    0.148
  Fraction of employment below 25th percentile               0.207    0.138
  Interaction: fraction above 75th percentile with
    fraction below 25th percentile                           0.049    0.023
Experience component (xβ)
  Fraction of employment above 50th percentile               0.460    0.147
  Fraction of employment above 75th percentile               0.224    0.111
  Fraction of employment below 25th percentile               0.264    0.126
  Interaction: fraction above 75th percentile with
    fraction below 25th percentile                           0.048    0.015

No. of observations                                          1,837

Note: Sample is pooled 1995–98 data for Business Register-based firms, defined as those with at least 50 percent of U.S. employment in the six analysis states matched to Economic Census and Compustat data. SD = standard deviation.
5.8).13 Clearly these firms are more human capital intensive than the full sample—the proportion of the workforce above the median economywide threshold of skill (all measures) is greater, as is the proportion above the 75th percentile. The proportion below the 25th percentile, by contrast, is smaller. However, all measures still exhibit substantial heterogeneity: although the mean of each variable is different in the two samples, the standard deviations are very similar. Tables 5.9 and 5.10 present the results of estimating equation (3), the 13. Because we use a log specification, we eliminate firms with missing or zero values. This procedure creates an even smaller sample than the Compustat matched sample described in the appendix. The excluded firms tend to have a greater fraction of skilled workers, with greater representation in the upper tail of both the person effect and experience components, and on average, the level of tenure of the workers in the excluded firms is above average.
Table 5.9    The relation between market value and the complete human capital measure (analysis level: firm; dependent variable: log market value)

                                                       All sectors
Explanatory variable                              (A)          (B)          (C)
Fraction of workers above 50th percentile
  of human capital                              0.732
                                               (0.181)
Fraction of workers above 75th percentile
  of human capital                                           0.579        0.554
                                                            (0.269)      (0.319)
Fraction of workers below 25th percentile
  of human capital                                          –0.414       –0.451
                                                            (0.313)      (0.385)
Interaction of above 75th and below
  25th percentiles                                                        0.343
                                                                         (1.785)
Log capital                                     0.421        0.422        0.422
                                               (0.026)      (0.026)      (0.026)
Log other assets                                0.539        0.538        0.538
                                               (0.026)      (0.026)      (0.026)
No. of observations                             1,837        1,837        1,837
R²                                              0.851        0.851        0.851

Notes: The human capital measure is h = θ + xβ. Data were pooled for the years 1995–1998. The analysis sample is firms in six states (defined as the firms with at least 50 percent of U.S. employment in the analysis states) matched to Economic Census and Compustat data. All regressions include year effects, 2-digit SIC effects for the firm's primary industry, and indicators for multi-location status and whether the firm had establishments in 1, 2, or 3 2-digit SIC categories. Standard errors in parentheses.
(log) market value regressions, using our two sets of human capital measures.14 In all specifications, we find a strong, positive relation between (log) market value and physical and other assets that is consistent with the theory and the empirical literature.15 The value added by our analysis is that we can also measure human capital at the firm level. In our simplest specification (table 5.9, column [A]), a larger fraction of employees in the upper half of the human capital distribution is associated with significantly greater market value. This result is of interest because, if the compensation of highly skilled workers is proportionate to their skill, and no correlation exists between unmeasured (tangible or intangible) assets and human capital, market value should be unaffected once these other variables have been controlled. Yet the estimated effect of human capital is quite large: an increase of 1 standard deviation in the proportion of the workforce that is above average is associated with a change of approximately 14 log points in market value (set against a standard deviation of market value that is quite large—200 log points). 14. The reported results are based upon pooled data for 1995–98. 15. For this log-linear specification, the coefficients on a particular asset (e.g., log of physical capital) should reflect the share of that asset in the total.
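As a rough check on the magnitude cited above, the 14-log-point figure is consistent with multiplying the column (A) coefficient on the share of workers above the economywide median (0.732, table 5.9) by the standard deviation of that share in this sample (0.189, table 5.8). A sketch of the arithmetic:

```latex
% Implied change in log market value from a one-standard-deviation increase in the
% share of workers above the economywide median of h, using the table 5.9, column (A)
% coefficient and the table 5.8 standard deviations.
\[
  \Delta \ln(\mathrm{MV}) \;\approx\; \hat{\beta}_{>50}\,\sigma_{s_{>50}}
  \;=\; 0.732 \times 0.189 \;\approx\; 0.14 \quad (\text{about 14 log points}),
\]
\[
  \text{set against } \sigma_{\ln(\mathrm{MV})} \approx 2.0 \quad (\text{about 200 log points}).
\]
```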
Table 5.10
The relation between market value and human capital, decomposed (analysis level: firm; dependent variable: log market value) All sectors
Explanatory variable
Fraction of workers above 50th percentile for θ
(A)
(B)
(C)
0.867 (0.370) –0.455 (0.392) 5.163
0.610 (0.385) –1.249 (0.543)
1.038 (0.242)
Fraction of workers above 75th percentile for θ
Fraction of workers below 25th percentile for θ
Interaction of above 75th and below 25th percentiles for θ
(2.061)
Fraction of workers above 50th percentile for xβ
0.001 (0.292)
Fraction of workers above 75th percentile for xβ
–0.964 (0.467) –0.682 (0.422)
0.424 (0.026) 0.529 (0.026)
0.428 (0.026) 0.524 (0.026)
–0.327 (0.520) –0.251 (0.491) –4.776 (2.501) 0.428 (0.026) 0.521 (0.026)
1,837 0.854
1,837 0.855
1,837 0.857
Fraction of workers below 25th percentile for xβ
Interaction of above 75th and below 25th percentiles for xβ
Log capital
Log other assets
No. of observations
R²
Note: See table 5.9 notes.
When we estimate more-complex specifications, we find that the effects of different parts of the distribution of h on market value have some asymmetries that follow patterns similar to those found for productivity. That is, we find that the upper tail of the distribution of h has a disproportionately positive effect on market value and that the interaction effect of the lower and upper tails is positive. The positive interaction effect reinforces the asymmetry.

The decomposition of these results into person and experience effects is even more striking, however. In particular, all of the positive effect on market value is due to workers who have higher θ (person effects; table 5.10, column [A]). A striking result is that, although both high-θ and highly experienced workers are more productive (table 5.5), only the person effect is positively related to market value. When we examine the more detailed specifications using the upper and lower tails of the distributions of human capital, the results again show that high-person-effect firms are higher-market-value firms; but, again, asymmetries appear in the effects of different parts of the distribution.
Column (B) of table 5.10 shows that the upper quartile of the person effect has a disproportionate impact on market value. Column (C) shows that the interaction effect is positive. The disproportionate impact of the upper quartile found in column (B) is also present in column (C), which can be seen by combining the linear and interaction effects (evaluated at means). An increase in the upper quartile of the person effect yields a combined effect (linear plus interaction) of 1.374, while an increase in the lower quartile of the person effect yields a combined effect of –0.537.

We find it striking that the person effect is important in predicting market value. Recall that the person effect is the component that includes "unobservable" components of skill. Thus, one interpretation of these results is that value creation is highest for firms that do a better job of attracting and retaining workers with dimensions of skill that are difficult to observe. We also find it striking that the upper tail of the person effect matters disproportionately. Those firms that have the highest share of the "best and the brightest" workers (as measured by workers in the top quartile of the person effect) are those with the highest market value. Again, more research is necessary to determine whether these findings are due to the correlations between high-skill workers and unmeasured assets (that is, omitted variables).

5.7 Summary and Concluding Remarks

Finding and quantifying new measures of human capital that could be introduced into a firm-level production function is an important challenge for the federal statistical system, particularly given the advent of the new economy. This paper uses universe micro-level data on both employers and employees to create new measures that begin to address this challenge. Our new measures are closely related to those that have been developed in the existing literature, but the integration of the employer and employee data permits a much richer measurement approach to human capital. We document the substantial consistency between our new measures and earlier measures pioneered by JGF (and subsequent closely related work). But we also extend the previous work in a variety of ways. First, our human capital measures encompass the traditional measures but also include effects not captured in traditional measures. Second, our integrated data permit these human capital measures to vary within and between firms in the same way that other inputs and outcomes can vary. Third, we examine different aspects of human capital—pure skill, experience, and a summary measure—and find marked differences in their distributions. We also use the richness of the data, especially the firm-level distribution of human capital, to capture relevant aspects of firm-level differences in organizational capital and workplace practices.
In the present analysis, we examine the relations between human capital, productivity, and market value. Not surprisingly, for measures of human capital and productivity, we find strong positive links in the microdata that differ according to the component of human capital used. The most-skilled workers (in terms of the estimated person effects) have a disproportionate positive impact on productivity, and the least-skilled workers (in terms of the estimated experience effects) have a disproportionate negative impact on productivity. We find that human capital is also related to market value even after controlling for total physical assets. Interestingly, the component of skill with factors that are “unobservable” (at least to the econometrician) is most closely related to market value. At this stage of our analysis, we are unable to distinguish the observable from the unobservable components of skill. Future work might fruitfully explore this aspect of the analysis and results. The new, micro-based measures of human capital incorporate unobservable dimensions of each worker’s skill. Our use of these measures to examine their role in accounting for variation across firms is clearly exploratory. The strong empirical relations that we have uncovered may reflect a variety of direct and indirect effects of human capital.
Appendix

This section describes the construction of the data and key variables used for analysis. Many of the steps involved in the formation of the data are part of the efforts at the LEHD project and the Center for Economic Studies (CES) to develop data infrastructure; these organizations have detailed the steps in a number of technical documents. We provide here an overview of the data construction and refer to technical documents as needed. The process of constructing the data can be broken into two large segments. The first segment is the formation of an "establishment-level" file used in the productivity analysis; the file contains human capital measures constructed from the LEHD database and business traits from Economic Census microdata. The second segment is the formation of a firm-level data set that uses an aggregated version of the establishment file matched to Compustat data for the years 1995–98.

Building the Establishment-Level File

The process of building our approximation of an establishment-level file can be summarized as follows: we use the LEHD database to estimate human capital measures for each worker, and we use these measures along with a common economywide set of thresholds to generate variables characterizing the distribution of human capital at each business unit.
Next, using data from the 1997 Economic Census microdata, we add in measures of productivity for businesses in all sectors as well as capital intensity data for manufacturing businesses.

The LEHD database links worker identifiers and demographic detail to employer identifier variables. The construction of the LEHD database is described in detail in LEHD Program (2002). We begin by using the database to compute human capital measures for each worker using methods from AKM that are applied to the LEHD database in the manner described in ALM. The time span of the data used to estimate the person fixed effect varies across states; this paper uses human capital estimates drawn from a seven-state sample and uses matches of these estimates to establishment-level and firm-level data for six states for the years 1995–98.

Using the worker-specific human capital measures from ALM, we create variables summarizing the distribution of human capital at each business unit. We define a business unit by Employer Identification Number (EIN), two-digit Standard Industrial Classification (SIC) code, state, and year. This level of aggregation was selected because it is the smallest that is common to both the LEHD database and the Economic Census microdata. At this level of aggregation, most units are establishments, and thus we refer to this as an "establishment" file even though a multi-unit firm with multiple establishments in the same two-digit SIC state cell will have data aggregated across those establishments.

Although the LEHD data contain identifiers for all individuals employed by each business during a given year, not all workers contribute to our characterization of the human capital distribution at the establishment in that year. Rather, we identify workers who are imputed to have worked full time at any job during the year and include from that group only those who were working at the firm at the end of the first quarter of the year.16 We restrict the set of jobs to those held by workers imputed to have worked full time because we do not observe information on hours worked, yet we wish to separate human capital effects from labor utilization effects. The timing restriction provides an approximation of employment at a point in time and corresponds closely to point-in-time employment as measured in the Economic Census, which collects employment for the pay period that includes March 12.

To evaluate the consequences of the restrictions on employment, we present summary statistics for alternative samples (table 5A.1). We further show descriptive statistics for variables created from the establishment-level data set that includes the full set of possible jobs—that is, all jobs held by workers imputed to have worked full time at any job in that year (column [A]).

16. A worker employed at the end of the first quarter is characterized as having worked for that employer in both the first and second quarters.
Table 5A.1    Summary statistics for alternative samples

                                 All full-time   All full-time jobs at   All full-time jobs at end of
                                 jobs            end of first quarter    first quarter with ≥5 workers
                                 (A)             (B)                     (C)
No. of establishments            1,391,839       1,078,877               429,529
No. of jobs                      40,967,355      21,829,227              20,156,292
Fraction of firms in:
  Agriculture                    0.020           0.014                   0.012
  Mining                         0.008           0.009                   0.009
  Construction                   0.072           0.063                   0.060
  Manufacturing                  0.125           0.154                   0.162
  Public administration          0.026           0.037                   0.039
  FIRE                           0.057           0.063                   0.062
  Retail trade                   0.191           0.162                   0.161
  Services                       0.380           0.360                   0.356
  TCE                            0.057           0.063                   0.065
  Wholesale trade                0.064           0.074                   0.073
Mean sales per employee          4.506           4.709                   4.731
Mean capital intensity           4.124           4.215                   4.229
Notes: Statistics are employment weighted. Analysis samples are based on 1997 unemployment insurance (UI)-based establishments for six states (defined as Employer Identification Number [EIN]/2-digit Standard Industrial Classification [SIC] units).
We also report the same set of measures with the added point-in-time restriction (column [B]). When the point-in-time restriction is imposed, the number of jobs falls nearly one-half (from almost 41 million jobs to about 22 million jobs), but the number of establishments declines at a far lower rate, from about 1.4 million to approximately 1.1 million. Establishments that are smaller and have higher worker turnover are at higher risk of being eliminated by this restriction. A comparison of the numbers across columns (A) and (B) suggests, however, that the two data sets vary only slightly from each other in terms of industry composition. Specifically, column (B) has a slightly higher share of manufacturing establishments (13 percent in column [A]; 15 percent in column [B]), a slightly lower share of retail (19 percent vs. 16 percent), and a slightly lower share of services (38 percent vs. 36 percent). Businesses eliminated through the point-in-time restriction appear to be less productive (lower sales per employee) and, among manufacturers, less capital intensive, although these differences are relatively small.

Many business units at this level of aggregation employ a small number of workers. Our ability to characterize the distribution of human capital at small businesses (here, those with fewer than five full-time employees) is limited because one worker can make an enormous difference.
Moreover, even for small to medium-sized businesses, the empirical distribution of human capital at a firm is quite noisy and potentially misleading. For example, the firm may have some positive probability of having workers in a particular percentile range but in fact have no workers in that range at a point in time. For these practical as well as conceptual reasons, we generate measures of the within-firm distributions based upon the kernel-density estimate (for h and for each of the components) for each firm (see ALM for details). The use of this method involves trade-offs, however. The method allows us to measure the human capital at businesses more accurately, but the data requirements for the estimation procedure prevent us from using the smallest of firms in our analysis. Specifically, we generate a kernel-density estimate for each of the three human capital measures at each business that has at least five full-time workers at the end of the first quarter in a given year. The size restriction eliminates less than 10 percent of jobs (about 1.6 million out of about 22 million; table 5A.1, columns [B] and [C]).17 However, this restriction causes the number of business units to decline more than 60 percent (from about 1.1 million to less than 430,000). Summary statistics suggest, however, that although the firms lost through this restriction are small, they are spread evenly throughout all sectors of the economy and do not differ substantially in mean level of productivity or capital intensity.

We use a common set of thresholds to construct the establishment-level skill measures for the pooled distribution of all jobs held in all states currently in the LEHD database by workers imputed to have worked full time at the end of the first quarter of 1997. These thresholds are the median, the 75th percentile, and the 25th percentile value of each of the three human capital measures. We then calculate the cumulative density at each business between each of the thresholds to generate the proportions above the economywide median and economywide 75th percentile of h and the proportion below the economywide 25th percentile of h. We construct similar measures for θ and xβ.

The mean value of each human capital measure is sensitive to the set of sample restrictions imposed as well as to the method used to compute business-level skill (table 5A.2). Columns (A) through (C) correspond to columns (A) through (C) in table 5A.1 in terms of the restrictions imposed on the data. To generate the mean human capital measures reported in each of these three columns, we use the same set of thresholds described above along with the empirical distribution at each business unit rather than the smoothed estimate. Column (D) presents summary statistics derived from the smoothed kernel-density estimate of the distribution at each business.

17. Additional restrictions are imposed on the range of h, θ, and xβ values included in these measures. If a worker has a value for h that is below 6 or above 14 or a value of θ that is below –2 or above 2, that worker is excluded from the computation of the kernel-density estimate at the business. This restriction removes only the extreme outliers and results in a minor loss of both jobs and firms.
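The within-business skill shares just described can be sketched in a few lines of code. The function and variable names below are hypothetical, and scipy's default Gaussian kernel stands in for whatever kernel and bandwidth choices were actually used (see ALM for the details); the sketch only illustrates how the economywide thresholds and a smoothed within-business density combine to produce the three shares.

```python
import numpy as np
from scipy.stats import gaussian_kde

def economywide_thresholds(h_pooled):
    """25th, 50th, and 75th percentile values of the pooled worker-level h distribution."""
    return np.percentile(h_pooled, [25, 50, 75])

def business_skill_shares(h_business, q25, q50, q75):
    """Smoothed shares of a business's workers relative to the economywide thresholds,
    based on a kernel-density estimate of the within-business distribution
    (computed only for businesses with at least five full-time workers)."""
    if len(h_business) < 5:
        raise ValueError("too few workers for a kernel-density estimate")
    kde = gaussian_kde(h_business)
    return {
        "above_50": kde.integrate_box_1d(q50, np.inf),
        "above_75": kde.integrate_box_1d(q75, np.inf),
        "below_25": kde.integrate_box_1d(-np.inf, q25),
    }

# Illustration with simulated values standing in for the LEHD-based h estimates.
rng = np.random.default_rng(0)
pooled_h = rng.normal(10.0, 1.0, size=100_000)   # pooled worker-level h values
q25, q50, q75 = economywide_thresholds(pooled_h)
one_business = rng.normal(10.5, 0.8, size=40)    # one business's workers
print(business_skill_shares(one_business, q25, q50, q75))
```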
0.490 0.248 0.260 0.049
0.421 0.205 0.335 0.052
0.046
0.038
0.039
0.051
0.254
0.339
0.511 0.256 0.245
0.246
0.187
0.507 0.254 0.253
0.496
0.403
1,078,877 21,829,227
0.049
0.481 0.236 0.266
0.046
0.520 0.258 0.232
0.038
0.242
0.245
0.502
429,529 20,156,292
All full-time jobs at end of first quarter with ≥5 workers (C)
0.049
0.473 0.229 0.266
0.049
0.513 0.261 0.236
0.043
0.257
0.248
0.497
429,529 20,156,292
All full-time jobs at end of first quarter with ≥5 workers, using KDE (D)
Notes: Statistics are employment weighted. Analysis samples are based on 1997 UI-based establishments for six states (defined as EIN/2-digit SIC units). Columns (A), (B), and (C) are based on the empirical establishment-level distributions. Column (D) was computed using kernel-density estimates (KDE).
Fraction of workers above 50th percentile for xβ
Fraction of workers above 75th percentile for xβ
Fraction of workers below 25th percentile for xβ
Interaction of above 75th and below 25th percentiles for xβ
Fraction of workers above 50th percentile for θ
Fraction of workers above 75th percentile for θ
Fraction of workers below 25th percentile for θ
Interaction of above 75th and below 25th percentiles for θ
Fraction of workers above 50th percentile for human capital Fraction of workers above 75th percentile for human capital Fraction of workers below 25th percentile for human capital Interaction of above 75th and below 25th percentiles for human capital
1,391,839 40,967,355
All full-time jobs (A)
All full-time jobs at end of first quarter (B)
Summary statistics on human capital distribution across alternative samples
No. of establishments No. of jobs
Table 5A.2
Thus, differing methods are used on the same set of workers and firms to generate the means in columns (C) and (D). In this way, we are able to isolate the effect of each sample restriction as well as the effect of using kernel-density estimates of the distribution of human capital at each business.

A disproportionate number of those workers eliminated through the point-in-time restriction come from the bottom quartile of the overall experience distribution, and businesses that are cut have a higher share of these low-experience workers (table 5A.2, columns [A] and [B]). Recalling that many of the eliminated establishments are retail or service-sector businesses, this fall in the share of low-experience workers is consistent with the change in industry composition. Both the full and restricted data sets appear to have a similar mean share of low-θ and high-θ workers. Thus, the effect of the restriction appears to be primarily on the experience component of human capital. In spite of the large number of establishments eliminated through the employer-size restriction, the smaller establishments that are eliminated apparently do not employ workers of systematically different skill levels (column [C]). Finally, columns (C) and (D) show mean human capital measures estimated by the two different methods using the same group of workers and firms. The similarities across columns suggest that use of the "smoothed" kernel-density skill distribution at each business does not notably change the average of the human capital distributions across businesses. Tables 5A.1 and 5A.2 show that the between-business distribution of skill shares is only slightly sensitive to the point-in-time employment restriction; and it is not at all sensitive to the size restriction nor to the method used to characterize the skill distribution at each business.

Table 5A.3 presents two sets of productivity regression results for EIN, two-digit SIC, state units in all sectors in 1997. The results in the left panel, obtained by using the human capital measures from table 5A.2, column (D), are identical to the results in table 5.5, columns (A) through (C). The right panel shows results obtained when we use the full data set of all jobs held by workers employed full time at any business in 1997 (the group described in column [A] in tables 5A.1 and 5A.2) and the empirical distribution of human capital to compute fractions of workers at different percentiles for each firm. Overall, the key productivity finding, that skill is positively related to productivity, appears to hold up across restrictions imposed on the data as well as the methods used to build the human capital measures, with one exception. Relative to the full sample, the interaction of the highest and lowest quartiles of either the θ or xβ distribution at a business has a much stronger effect on productivity in the restricted sample using the smoothed distributions. This difference may arise in part because we have removed many of the smaller firms that are more likely to suffer from measurement error (particularly for the tails of the distribution) in their human capital measures.
Table 5A.3
Comparison of the full data set and the kernel density estimate (KDE) subset for the relation between labor productivity and human capital, decomposed (analysis level: establishments in all sectors; dependent variable: log labor productivity) KDE data set
Explanatory variable
(A)
Fraction of workers above 50th percentile for θ
Fraction of workers above 75th percentile for θ
Fraction of workers below 25th percentile for θ
Interaction of above 75th and below 25th percentiles for θ
1.400 (0.009)
Fraction of workers above 50th percentile for xβ
Fraction of workers above 75th percentile for xβ
Fraction of workers below 25th percentile for xβ
Interaction of above 75th and below 25th percentiles for xβ
1.490 (0.120)
(B)
Full data set (C)
(D)
(E)
(F)
1.723 (0.008) –0.805 (0.006)
1.656 (0.009) –0.891 (0.008) 0.602 (0.037)
0.388 (0.009) –1.484 (0.007)
0.488 (0.010) –1.441 (0.008) –0.713 (0.039)
1.514 (0.005) 1.990 (0.015) –0.450 (0.010)
1.700 (0.017) –0.920 (0.017) 2.830 (0.078) 1.360 (0.006)
0.200 (0.020) –1.900 (0.017)
0.760 (0.022) –1.560 (0.018) –4.800 (0.090)
Notes: The human capital measure is h = θ + xβ. The estimation sample is UI-based establishments (defined at the EIN/2-digit SIC level) for six states matched to the 1997 Economic Census and Annual Survey of Manufactures data. Standard errors in parentheses. Other controls include 2-digit industry effects. Results are based on employment-weighted regressions.
Two steps remain in the construction of the establishment-level files used for analysis. In each of the four years, we match the human capital measures described above to the Business Register (formerly the SSEL) to obtain information on business structure. This information is then used to build the firm-level data used in the market value analysis. In general, we find approximately 99 percent of the EINs from the LEHD database in the Business Register. The last step involved in building the establishment-level file is to aggregate 1997 Economic Census data on labor productivity and capital intensity to the EIN, two-digit SIC, state level and link these aggregates to the human capital data for 1997.

Calculating Labor Productivity and Capital Intensity

We obtain establishment-level data from the 1997 Economic Census microdata. To form labor productivity for each EIN, two-digit SIC, state unit, we first sum employment on March 12 across all subunits with nonmissing sales revenue and positive employment. We then divide sales revenue at each subunit by this sum. Last, we calculate an employment-weighted average of the subunits to aggregate to the EIN, SIC2, state level.
We use a similar procedure for capital intensity.

Our objective is to maximize the number of observations in the human capital file for which we are able to obtain, from business data, some measure of labor productivity and, for manufacturers, capital intensity. In the majority of cases, we are able to link the two files by EIN, SIC2, and state. We are also able to incorporate business information at this same level of aggregation. However, some records in the human capital file and the business data file match by EIN and state but do not match by both EIN and SIC2. Rather than discarding these records, we instead apply EIN-level statewide measures to each of the EIN, SIC2, state observations in our matched file. We link to 354,549 units (274,043 EIN-SIC2-state matches and 80,506 EIN-state matches). Of these matches, we are able to construct a labor productivity measure for 337,495 units and a capital intensity measure for 33,926 manufacturers.

The key variables constructed from the economic censuses are defined more formally below.

Log labor productivity: log of sales revenue per worker employed on March 12 at each EIN, two-digit SIC, state unit or each EIN state unit.

Log capital intensity: log of the capital stock per worker employed on March 12. The capital stock is measured as the book value of capital in the Census of Manufactures.

Two-digit SIC: modal two-digit SIC code of all reporting units under a state EIN (employment-weighted).

For the firm-level productivity analysis, we aggregate all of the variables (labor productivity, capital intensity, and the human capital variables) to the firm (enterprise level) using employment weights. The Economic Census files contain firm or enterprise identifiers that make this aggregation relatively straightforward. As noted in the main text, we retain only those firms that have 50 percent or more of their employment in the six states that are used for this analysis. Also, as noted in tables 5.6 and 5.7, we include controls for multi-unit status and diversification indicators to reveal whether the firm operates in more than one industry.

Building the Compustat-Matched File

The Compustat database has two types of cases, those still being traded at the time the data are released and those that are no longer traded but were at some point since 1981. The Compustat documentation refers to "Active" and "Research" cases, but we refer to them as active and inactive cases here. Two types of matching procedures were used to identify links between the Compustat data and the Census Bureau's Business Register. When possible, we used exact matching of EINs to identify the link. When that approach did not succeed (often because no EIN was available from
Compustat), we used business names, addresses, and industry codes to do probabilistic record linking. Each establishment on the Business Register has an EIN associated with its payroll tax filings, while most stock issues in the Compustat database have an EIN from SEC filings. We carry out the EIN matching by first extracting a list of each unique combination of EIN and firm identifier from the Business Register and then matching each such combination to the unique EINs in the Compustat database. In some cases, a single EIN is associated with more than one Business Register firm or Compustat stock issue. Also, in a few cases, more than one EIN from the Compustat database is associated with the same firm on the Business Register. In some cases of duplication, we had a clear reason to think that one link should be preferred, and we dropped the other links.18 Otherwise, we eliminated all the records involved. If an inactive Compustat case linked to several Business Register firm identifiers in several years, we used the Business Register identifier in the year closest to that in which the case became inactive as the link. If we did not find an exact EIN match, we tried probabilistic record linkage using information on name, address, industry (SIC code), and EIN from the two databases. Stock issues from businesses that are based overseas account for a large portion of the cases for which we tried statistical linkage because they often do not have an EIN in the Compustat data. Overseas and inactive cases also generally do not have complete address information, so the statistical linkage is based on name, state, and industry for a large fraction of these cases. In this paper, we use the statistical links only for the active cases (plus EIN matches for both active and inactive cases) because of concern about the quality of links for the inactive cases.19 18. For example, some of the nonunique matches on the Business Register involve one business that appears to be active and one that appears to be inactive. To match as many cases as possible, we did not eliminate inactive establishments before matching but did so afterward if the match was not unique. Some of the nonunique Compustat matches involve cases that have a Compustat-assigned Committee on Uniform Security Identification Procedure (CUSIP), which often carries alternative versions of data for companies that also have a standard CUSIP. For example, if an acquisition took place in 1999, the data we have might contain two records for the acquiring company: one with a standard CUSIP that has data reflecting the company’s holdings in each year, and another with a Compustat-assigned CUSIP that has consolidated data for the two businesses for some years before the merger. In this case, we would drop the record with the Compustat-assigned CUSIP (and consolidated data) and keep the record with the standard CUSIP. 19. Identifying the appropriate link was more complicated for the inactive cases both because of a lack of detailed address information and because it was not clear at the outset which years we should use for statistical linking. Because the statistical linking was quite time intensive, we did not try to match all cases to all available years of the Business Register but rather tried first with earlier years and then worked forward if a match was not found. After having identified links using that approach, we compared years in which the identified firm was active on the Business Register to the years with nonmissing data from Compustat. 
In a significant fraction of cases, the years did not line up, and that problem was the main reason we decided not to use the 346 statistically linked inactive cases.
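A stylized version of the two-stage linking procedure is sketched below. The data frames, column names, and name-similarity rule are hypothetical stand-ins (the actual work used probabilistic record linkage on name, address, and industry); the sketch only illustrates the order of operations: exact EIN matches first, ambiguous duplicates dropped, then approximate matching on name, state, and industry for the remaining cases.

```python
import pandas as pd
from difflib import SequenceMatcher

def link_compustat_to_register(compustat: pd.DataFrame, register: pd.DataFrame):
    """Both frames are assumed to have columns ['ein', 'name', 'state', 'sic2'];
    register also carries a 'firmid' identifier. All names are illustrative."""
    # Business Register establishments carry an EIN; Compustat issues may not.
    compustat = compustat.dropna(subset=["ein"])

    # Stage 1: exact EIN matches; drop EINs that link to more than one record.
    exact = compustat.merge(register, on="ein", suffixes=("_cs", "_br"))
    exact = exact[~exact["ein"].duplicated(keep=False)]

    # Stage 2: crude name similarity within state and 2-digit SIC for the rest.
    unmatched = compustat[~compustat["ein"].isin(exact["ein"])]
    links = []
    for _, row in unmatched.iterrows():
        candidates = register[(register["state"] == row["state"])
                              & (register["sic2"] == row["sic2"])]
        for _, cand in candidates.iterrows():
            score = SequenceMatcher(None, str(row["name"]).lower(),
                                    str(cand["name"]).lower()).ratio()
            if score >= 0.90:   # arbitrary illustrative cutoff
                links.append({"ein": row["ein"], "firmid": cand["firmid"], "score": score})
    return exact, pd.DataFrame(links)
```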
Table 5A.4    Summary statistics on Compustat file for alternative samples

                    Total (March 2001     Deleted from        Deleted due       No SSEL    Exact       Name/address/SIC link
Compustat status    Compustat database)   Compustat before    to duplication    link       EIN link    Had EIN    No EIN
                                          1996
Active              9,885                 0                   569               1,457      6,905       532        422
Inactive            11,151                6,722               300               818        3,311       0          0
Total               21,036                6,722               869               2,275      10,216      532        422

Note: SSEL = Census Business Register, previously known as the Standard Statistical Establishment List.
Of the 14,312 Compustat cases that were traded at some point after 1995, we found a unique link for 11,170 cases (table 5A.4). However, some of these cases were linked to Business Register firms that were inactive in the years of interest or were missing essential Compustat data. We restricted the sample to cases that link to at least one establishment on the Business Register between 1996 and 1998 (the years for which we have LEHD estimates) and that have a price reported in Compustat in at least one year in that range; the result was 9,917 cases. The sample for use in conjunction with our human capital estimates is limited to businesses that have some employment in the six states for which we have LEHD estimates. We calculated sample sizes for this six-state subsample as a function of the share of employment we require in those states (table 5A.5). Table 5A.6 gives evidence on how the linking process affects the composition of the sample. The first column gives means for the full Compustat sample using pooled data from 1995–98 period for all Compustat cases that have nonmissing data on sales and positive data on employment. The second column takes the subset of those cases that were uniquely linked to a Business Register firm, and the third column further restricts the sample to those with at least 50 percent of Business Register employment in the six LEHD states.20 All three of these samples consist of firms that are very large simply because of the restriction to publicly traded firms in Compustat. The matched samples have somewhat smaller firms on average than the full Compustat sample. Very large, complex firms may be more likely to be
20. The sample means in table 5.8 are based on a sample with nonmissing data for a larger set of variables and an additional link to the LEHD data. The final regression sample is smaller than that in table 5A.3 and has slightly smaller Compustat employment on average (table 5.8).
Table 5A.5    Number of firms in Compustat/Standard Statistical Establishment List database with employment in sample states

                          Required employment share in LEHD states
Year                      Any        50%        90%        All
1995                      3,529      1,254      828        725
1996                      3,578      1,283      840        757
1997                      3,851      1,397      902        800
1998                      3,903      1,370      871        793
In at least 1 year        4,841      1,942      1,294      1,170

Note: LEHD = Longitudinal Employer-Household Dynamics Program.
Table 5A.6    Sample means for full and matched samples

                                        Full             Matched            Matched, with at least 50%
                                        Compustat        Compustat/SSEL     employment in LEHD states
No. of firms                            32,613           22,911             4,367
Market value                            $2,149 million   $1,631 million     $1,036 million
Compustat employment                    6,281            5,068              2,925
SSEL employment                                          3,497              1,878
Sales, net                              $1,320 million   $1,028 million     $684 million
Compustat labor productivity
  (sales/employment)                    $363,000         $316,000           $358,000
dropped in linking the two databases because of problems with apparent duplication or because multinationals may not have an EIN in the database. Very large companies are also more likely to have employment spread across many states. Thus, they are less likely to be included in the third column of table 5A.6. Differences exist across the two data sources for employment, in part, probably, because of the inclusion of overseas employees in the Compustat figures (the Business Register data include only U.S. employees). However, detailed examination of the microdata suggests that differences exist even for firms that operate only in the United States, and the sources of those discrepancies are not clear. Possible candidates include differences in the definition of employee or differences in the dating of employment (the Business Register data reflect employment as of the week containing March 12, whereas the Compustat employment numbers do not refer to a particular date). For the Compustat variables, we follow the measurement methodology in the literature (e.g., Hall 1990, 1998; and BHY). We use the following primary Compustat variables.
Market Value: value of common stock at the end of the fiscal year plus preferred stock value plus total debt. In Compustat mnemonics, it is MKVALF + PSTK + DT.21

Physical Capital: gross book value of the capital stock, deflated by the gross domestic product (GDP) implicit price deflator for fixed investment. The deflator is applied at the calculated average age of the capital stock, computed on the basis of the three-year average of the ratio of total accumulated depreciation to current depreciation.

Other Assets: total assets minus the book value of physical capital. This item includes receivables, inventories, cash, and other accounting assets such as goodwill reported by companies.

21. Some differences in measurement methodology appear in the literature. Our method most closely follows that of BHY. One difference is that Hall (1990) suggests adjusting the value of long-term debt for differences in the age structure of the debt. However, in the absence of firm-level information on the age structure of the debt, Hall assumes all long-term debt has a twenty-year maturity. One can readily develop scenarios in which this assumption induces substantial measurement error.
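Read literally, these definitions translate into something like the following sketch for a single firm's annual records sorted by fiscal year. The column names are placeholders (only the market-value mnemonics MKVALF, PSTK, and DT are given in the text), and the deflation step is one plausible reading of the description rather than the authors' code.

```python
import pandas as pd

def build_valuation_variables(firm: pd.DataFrame, inv_deflator: dict) -> pd.DataFrame:
    """firm: one firm's annual records, sorted by 'year', with hypothetical columns
    mkvalf, pstk, dt, gross_ppe, accum_dep, dep, total_assets.
    inv_deflator: year -> GDP implicit price deflator for fixed investment."""
    out = pd.DataFrame({"year": firm["year"]})

    # Market value = common stock value + preferred stock value + total debt.
    out["market_value"] = firm["mkvalf"] + firm["pstk"] + firm["dt"]

    # Average age of the capital stock: three-year average of accumulated over
    # current depreciation; the deflator is applied at the implied vintage year.
    avg_age = (firm["accum_dep"] / firm["dep"]).rolling(3).mean()
    vintage = (firm["year"] - avg_age.round()).fillna(firm["year"]).astype(int)
    out["physical_capital"] = firm["gross_ppe"] / vintage.map(inv_deflator).values

    # Other assets = total assets minus the book value of physical capital.
    out["other_assets"] = firm["total_assets"] - firm["gross_ppe"]
    return out
```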
References

Aaronson, Daniel, and Daniel Sullivan. 2001. Growth in worker quality. Economic Perspectives 25 (4): 53–74.
Abowd, John M. 1989. The effect of wage bargains on the stock market value of the firm. American Economic Review 79 (4): 774–800.
———. 1990. Does performance-based managerial compensation affect corporate performance? Industrial and Labor Relations Review 43 (3): 52S–73S.
Abowd, John M., Robert H. Creecy, and Francis Kramarz. 2002. Computing person and firm effects using linked longitudinal employer-employee data. Longitudinal Employer-Household Dynamics Technical Paper no. TP-2002-06. Washington, DC: U.S. Census Bureau.
Abowd, John M., John Haltiwanger, Julia Lane, and Kristin Sandusky. 2001. Within and between firm changes in human capital, technology, and productivity. Longitudinal Employer-Household Dynamics Technical Paper no. TP-2001-03. Washington, DC: U.S. Census Bureau.
Abowd, John M., Francis Kramarz, and David Margolis. 1999. High wage workers and high wage firms. Econometrica 67 (2): 251–334.
Abowd, John M., Paul Lengermann, and Kevin McKinney. 2003. The measurement of human capital in the U.S. economy. Longitudinal Employer-Household Dynamics Technical Paper no. TP-2002-09. Washington, DC: U.S. Census Bureau.
Acemoglu, Daron. 2001. Technical change, inequality and the labor market. Journal of Economic Literature 40 (1): 7–72.
Becker, Gary S. 1964. Human capital: A theoretical and empirical analysis, with special reference to education. New York: Columbia University Press.
Bond, Stephen R., and Jason G. Cummins. 2000. The stock market and investment in the new economy: Some tangible facts and intangible fictions. Brookings Papers on Economic Activity, Issue no. 1:61–124.
Bresnahan, Timothy F., Erik Brynjolfsson, and Lorin M. Hitt. 2002. Information technology, workplace organization and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics 117 (1): 339–76.
Brynjolfsson, Erik, Lorin M. Hitt, and Shinkyu Yang. 2002. Intangible assets: How the interaction of computers and organizational structure affects stock market valuations. Brookings Papers on Economic Activity: Macroeconomics, Issue no. 1:137–99.
Burgess, Simon, Julia Lane, and David Stevens. 2000. Job flows, worker flows and churning. Journal of Labor Economics 18 (3): 473–502.
Foster, Lucia, John Haltiwanger, and C. J. Krizan. 2001. Aggregate productivity growth: Lessons from microeconomic evidence. In New developments in productivity analysis, ed. Charles Hulten, Edward Dean, and Michael Harper, 303–63. Chicago: University of Chicago Press.
Gore, Al. 1997. Remarks by vice president of the United States. Speech at the Microsoft CEO Summit. 8 May, Seattle, Washington.
Groshen, Erica L. 1991a. Five reasons why wages vary among employers. Industrial Relations 30 (3): 350–81.
———. 1991b. Sources of intra-industry wage dispersion: How much do employers matter? The Quarterly Journal of Economics 106 (3): 869–84.
Gu, Feng, and Baruch Lev. 2001. Markets in intangibles: Patent licensing. New York University Working Paper.
Hall, Bronwyn. 1990. The manufacturing sector master file: 1959–1987. NBER Working Paper no. 3366. Cambridge, MA: National Bureau of Economic Research.
Hall, Bronwyn. 1998. Innovation and market value. Paper presented at the National Institute of Economic and Social Research Conference on Competitiveness and Productivity, London, England, February 1998. In Productivity, Innovation and Economic Performance, ed. Ray Barrell, Geoffrey Mason, and Mary O'Mahoney, 177–98. Cambridge: Cambridge University Press, 2000.
Hall, Robert. 2001. Struggling to understand the stock market. American Economic Review Papers and Proceedings 91 (2): 1–11.
Haltiwanger, John, and Ron Jarmin. 2000. Measuring the digital economy. In Understanding the digital economy, ed. E. Brynjolfsson and B. Kahin, 13–33. Cambridge: MIT Press.
Haltiwanger, John C., Julia I. Lane, and James R. Spletzer. 2001. Productivity differences across employers: The role of employer size, age, and human capital. University of Maryland, Urban Institute, and Bureau of Labor Statistics. Working Paper.
Jorgenson, Dale W., F. Gollop, and B. M. Fraumeni. 1987. Productivity and US economic growth. Cambridge, MA: Harvard University Press.
Juhn, Chinhui, Kevin M. Murphy, and Brooks Pierce. 1993. Wage inequality and the rise in returns to skill. Journal of Political Economy 101 (3): 410–42.
Kremer, Michael, and Eric Maskin. 1996. Wage inequality and segregation by skill. NBER Working Paper no. 5718. Cambridge, MA: National Bureau of Economic Research.
Lane, Julia, Laurie Salmon, and James Spletzer. 2002. Establishment wage differentials. Washington, DC: The Urban Institute. Mimeograph.
Lengermann, Paul. 2002a. Comparing new and traditional measures of human capital. University of Maryland, Department of Economics. Manuscript.
———. 2002b. Is it who you are, where you work, or with whom you work? Reassessing the relationship between skill segregation and wage inequality. Ph.D. diss. College Park, MD: University of Maryland.
Longitudinal Employer-Household Dynamics (LEHD) Program. 2002. Longitudinal employer-household dynamics program: Employment dynamics estimates, versions 2.2 and 2.3. Longitudinal Employer-Household Dynamics Technical Paper no. TP-2002-05. Washington, DC: U.S. Census Bureau.
Comment
Katharine G. Abraham
Understanding how human capital affects output, productivity, and firm performance is key to understanding today's knowledge economy. Although observable human capital measures such as years of education and years of experience help to explain the systematic variation in earnings across individuals, they also leave a great deal unexplained. Abowd and coauthors use a rich longitudinal data set containing information on the universe of workers and establishments in six states to identify persistent differences in earnings across individuals that cannot be explained by observable characteristics or by fixed employer effects. Together with the estimated value of a worker's labor market experience, the individual-specific component of the earnings residual is treated as a measure of human capital. Both components of human capital—along with measures of the way that people with different kinds and amounts of human capital are grouped within establishments and firms—turn out to be related in interesting ways both to productivity and to the market value of enterprises.

Abowd and coauthors take pains to describe their work as an exploratory analysis, and the approach they have taken certainly strikes me as sensible. It nonetheless seems worthwhile to think a bit about the underpinnings of the measurement strategy the authors have adopted. Start with the wage equation that provides the coefficient estimates used to construct their human capital variables:

(1)   w_ijt = θ_i + x_it β + ψ_J(i,t) + ε_ijt
In this equation, w_ijt is the log of quarterly earnings for person i employed by firm j at time t, θ_i is a fixed person effect, x_it is a vector of time-varying individual variables that capture labor market experience, and ψ_J(i,t) is the fixed firm effect for whatever firm j employs person i at time t. Differences in wage rates across individuals captured by θ_i are assumed to reflect corresponding differences in productivity. Compared to the variables included in the typical human capital wage equation, the θ_i's have the potential to capture far more of the individual variation in human capital, though without providing much insight into the specific attributes responsible for this variation.
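For readers who want to see the mechanics of equation (1), the following sketch estimates person and firm effects by least squares on a small simulated panel. It is only an illustration of the two-way fixed effects structure, not the authors' actual estimator, which works with millions of worker-firm matches, identifies effects within connected groups of workers and firms, and uses specialized solvers; all data and variable names here are invented.

```python
# A minimal sketch of the two-way fixed effects structure in equation (1):
# log earnings = person effect + experience effect + firm effect + error.
# Synthetic data and invented names only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_workers, n_firms, n_periods = 200, 20, 6

true_theta = rng.normal(0.0, 0.3, n_workers)    # person effects (theta_i)
true_psi = rng.normal(0.0, 0.2, n_firms)        # firm effects (psi_J(i,t))
beta = 0.02                                     # return to a year of experience

rows = []
for i in range(n_workers):
    firm = rng.integers(n_firms)
    exper = rng.integers(0, 20)
    for t in range(n_periods):
        if rng.random() < 0.15:                 # occasional job changes connect firms
            firm = rng.integers(n_firms)
        log_wage = (true_theta[i] + beta * exper + true_psi[firm]
                    + rng.normal(0.0, 0.1))
        rows.append((i, firm, exper, log_wage))
        exper += 1
df = pd.DataFrame(rows, columns=["worker", "firm", "exper", "log_wage"])

# Design matrix: worker dummies, experience, and firm dummies (one firm dropped).
D_worker = pd.get_dummies(df["worker"], dtype=float)
D_firm = pd.get_dummies(df["firm"], drop_first=True, dtype=float)
X = np.hstack([D_worker.to_numpy(), df[["exper"]].to_numpy(), D_firm.to_numpy()])
coef, *_ = np.linalg.lstsq(X, df["log_wage"].to_numpy(), rcond=None)

theta_hat = coef[:n_workers]                    # estimated person effects
beta_hat = coef[n_workers]                      # estimated experience coefficient
print(f"estimated return to experience: {beta_hat:.3f} (true value 0.02)")
print(f"corr(estimated, true person effects): "
      f"{np.corrcoef(theta_hat, true_theta)[0, 1]:.2f}")
```

In this sketch the person effects are identified only up to the normalization implied by the dropped firm dummy, which is why the code compares estimated and true effects with a correlation rather than in levels.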
One concern with the empirical implementation of equation (1) lies with the use of quarterly earnings information as a basis for drawing conclusions about the distribution of human capital. Available evidence suggests that those with higher earnings capacities tend to work longer hours (see, for example, Rones, Ilg, and Gardner 1997). If quarterly earnings are higher for more productive workers not only because their hourly wages are higher but also because they work more hours, the estimated θ_i will reflect both the higher quality and the greater quantity of these workers' labor services. Given the "black box" nature of the estimated θ_i, one might also wonder whether factors other than differences in individuals' human capital that could affect wages have been controlled for properly.

A strength of equation (1) is the inclusion of establishment effects as a determinant of observed wage rates. The existence of employer effects on wages has been well documented (see, for example, the seminal work by Blau 1977 and Groshen 1991), and standard human capital equations that relate wages to individual characteristics but ignore employer effects likely produce biased coefficient estimates. But high-wage employers may not offer equally large wage premiums to all groups of workers, and this complicates the interpretation of the wage equation coefficients the authors have estimated.

To illustrate, consider a simple hypothetical economy consisting of equal numbers of type 1 and type 2 firms, each of which employs equal numbers of low-skilled and high-skilled workers. Type 1 firms pay no wage premiums; type 2 firms pay no wage premiums to low-skilled workers, but wage premiums of 10 percent to high-skilled workers. Workers move back and forth between the two types of firms, spending half their careers with each. If data for these firms and workers were used to estimate a wage equation like equation (1), the estimated coefficients would imply firm effects for type 2 firms of 5 percent rather than the zero percent that would be estimated using data for low-skilled workers or the 10 percent that would be estimated using data for high-skilled workers. And the estimated θ_i's would imply person effects on wages for high-skilled workers that were 5 percentage points larger than would be estimated in a model that allowed for firm effects that varied by worker skill level.
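The 5 percent figure in this hypothetical example is easy to verify. The following sketch simulates the economy just described and fits a misspecified wage equation with a skill effect and a single type 2 firm effect; the estimated firm effect comes out at roughly 5 percent, and the estimated skill effect is inflated by roughly the same amount. The 10 percent premium and the half-and-half splits are the hypothetical values from the text; the wage level and the skill differential are arbitrary.

```python
# Illustrative check of the hypothetical example in the text: type 2 firms pay a 10
# percent premium to high-skilled workers only, workers split their careers evenly
# across firm types, and the fitted wage equation allows only a single firm effect.
import numpy as np

rng = np.random.default_rng(1)
n = 40_000                                  # worker-period observations
high_skill = rng.random(n) < 0.5            # half the workforce is high skilled
type2 = rng.random(n) < 0.5                 # half of each career is spent at type 2 firms

log_wage = (2.0 + 0.5 * high_skill          # arbitrary wage level and skill differential
            + 0.10 * (high_skill & type2)   # premium paid only to high skilled at type 2
            + rng.normal(0, 0.01, n))

# Misspecified model: a skill effect plus a common type 2 firm effect, no interaction.
X = np.column_stack([np.ones(n), high_skill, type2]).astype(float)
coef, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated type 2 firm effect: {coef[2]:.3f}")   # roughly 0.05, not 0.00 or 0.10
print(f"estimated skill effect:       {coef[1]:.3f}")   # roughly 0.55, inflated by about 0.05
```

The same logic carries over to the person effects in equation (1): when firm premiums differ by skill group, part of the premium is loaded onto the estimated person effects.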
There is a limited body of empirical work that has explored employers' internal wage structures—and specifically the question of whether employer wage premiums differ across groups of workers—using data sets that contain establishment-level observations on pay by occupation. Data of this sort can be used to estimate models that allow for occupation, establishment, and the interaction between the two to affect wages:

(2)   w_jk = α_k O_k + ψ_j E_j + γ_jk O_k E_j.
In this equation, w_jk is the log of the average wage rate for workers in occupation k at establishment j, O_k is a vector of occupational dummies, and E_j is a vector of establishment dummies. If establishment wage premiums were completely uniform across occupations, all of the γ_jk's would be equal to zero; if they sometimes vary by occupation, some of the γ_jk's will be nonzero and the job cell (occupation by establishment) effects on wages will explain some portion of wage variation over and above the portion explained by the occupation and establishment dummies.

Lane, Salmon, and Spletzer (2001) use data from the Bureau of Labor Statistics Occupational Employment Survey to fit a model of this general sort. As noted by Abowd and coauthors, the Lane, Salmon, and Spletzer findings document a propensity for establishments that pay high wages to their accountants, for example, to pay high wages to their janitors too. But the findings also show variation in ln(wages) associated with differences in the size of establishment wage effects by occupation that are roughly two-thirds as large as the variation in ln(wages) attributable to the establishment main effects.1 Using National Compensation Survey data on average earnings by detailed occupation within establishments, Kaplan and Pierce (2000) find a correlation of just 0.33 between ln(average wages) for pairs of detailed occupations within the same establishment. Groshen (1991) analyzes Industry Wage Survey data and finds that, in her sample of six manufacturing industries, occupation-establishment interactions explain an average of only about 10 percent of the variation in observed individual wages, as contrasted with an average of about 30 percent for the establishment main effect. But Groshen and Levine (1998), using data from an employer wage survey conducted by the Federal Reserve Bank of Cleveland that covers establishments in a broader array of industries but may be less representative in other ways, find that in recent years the standard deviation of the ln(wage) component associated with occupation-establishment interactions is roughly equal in magnitude to that associated with the establishment main effects.

1. For this purpose, the variance attributable to the establishment main effects is measured as the incremental variance captured by the addition of establishment dummies to a model that already contains occupation dummies.
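The footnote's incremental-variance calculation can be illustrated with a small simulation along the following lines. The data, magnitudes, and variable names are invented; the studies cited above work with establishment microdata, but the logic of attributing variance first to occupation dummies, then to establishment dummies, and finally to occupation-by-establishment (job cell) interactions is the same.

```python
# A sketch of the variance decomposition behind equation (2): how much of the variation
# in ln(wages) across occupation-establishment cells is explained by occupation dummies
# alone, by adding establishment dummies, and (residually) by job-cell interactions.
# All data here are simulated and purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_occ, n_est = 10, 60
occ_ids = np.repeat(np.arange(n_occ), n_est)
est_ids = np.tile(np.arange(n_est), n_occ)

occ_fx = rng.normal(0, 0.30, n_occ)              # occupation wage effects
est_fx = rng.normal(0, 0.15, n_est)              # establishment wage premiums
cell_fx = rng.normal(0, 0.10, (n_occ, n_est))    # premiums that vary by occupation within establishment

ln_wage = (occ_fx[occ_ids] + est_fx[est_ids]
           + cell_fx[occ_ids, est_ids] + rng.normal(0, 0.02, n_occ * n_est))

def r_squared(*dummy_blocks):
    X = np.column_stack((np.ones(ln_wage.size),) + dummy_blocks)
    coef, *_ = np.linalg.lstsq(X, ln_wage, rcond=None)
    resid = ln_wage - X @ coef
    return 1.0 - resid.var() / ln_wage.var()

occ_d = pd.get_dummies(occ_ids, drop_first=True, dtype=float).to_numpy()
est_d = pd.get_dummies(est_ids, drop_first=True, dtype=float).to_numpy()

r2_occ = r_squared(occ_d)
r2_occ_est = r_squared(occ_d, est_d)
print(f"explained by occupation dummies:              {r2_occ:.2f}")
print(f"incremental share from establishment dummies: {r2_occ_est - r2_occ:.2f}")
print(f"left for job-cell interactions (and noise):   {1 - r2_occ_est:.2f}")
```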
The evidence just discussed does not necessarily imply that differences in internal wage structures across establishments are a source of bias in the estimated θ_i's. Indeed, the occupation-establishment interaction effects documented in the research I have described may reflect the characteristics of the individuals who hold jobs in particular occupations within particular establishments rather than true internal wage structure effects, and the available empirical evidence does not speak to the question of whether certain groups are systematically more likely to receive wage premiums than others. But there is also reason to think that employers may pursue wage policies that are more complex than simply paying uniformly high or uniformly low wages to all of their employees. Additional empirical research on firms' pay-setting policies and practices could help to resolve the proper interpretation of human capital estimates derived from models like equation (1).

Using their human capital measures, Abowd and coauthors estimate equations with measures of productivity and market value on the left-hand side and the human capital variables on the right-hand side. The assumption in these models is that firms choose the mix of human capital they will employ, which in turn affects their output and profitability. It seems likely, however, that the mix of human capital a firm employs is at least somewhat constrained by its history and the history of the industry to which it belongs. One clear illustration of this appears in the paper, which draws a contrast between the higher average years of worker experience in the manufacturing sector as compared to the finance, insurance, and real estate sector. Given the very different histories of these two industries, I am skeptical that these differences represent a true long-run equilibrium. Manufacturing employment has been stagnant or shrinking for many years, whereas employment in the finance, insurance, and real estate sector generally has trended upward. These different employment trajectories seem very likely to have affected the age and experience distribution of the two industries' workforces. It would not be surprising if, thirty years from now, the distribution of age and experience within a more mature finance, insurance, and real estate sector looked quite different from the distribution within the industry today.

There is also a question, of course, about the direction of causality that underlies the observed relationship between the human capital a firm employs, on the one hand, and its output and profitability, on the other. The authors are interested in how human capital affects output and profitability, but there may be causality running in the opposite direction, with managers at highly productive or highly profitable firms being freer to pay the high wages that make it easier to attract higher-caliber workers. Most managers would prefer to hire more capable employees; put simply, having more capable employees makes life easier for those in charge! Managers who work for some firms may be better able than those who work for others to indulge their preferences in this regard. I am not certain how one would sort this out, but it seems worth thinking about.

One notable pattern in the empirical findings is the very strong association that employing more high-θ_i workers has with both productivity and market valuation. In table 5.5, for example, the column (B) coefficients imply that increases in the fraction of the workforce with θ_i's above the 75th percentile are associated with increases in labor productivity four times as large as those associated with decreases in the fraction of workers with θ_i's below the 25th percentile. And in table 5.10, the column (B) coefficients
imply that increases in the fraction of the workforce with θ_i's above the 75th percentile are associated with increases in a firm's market valuation that are nearly twice as large as those associated with declines in the fraction of the workforce with θ_i's below the 25th percentile. Because quarterly earnings data were used to estimate the θ_i, the large implied effect of changing the concentration of high-θ_i workers may reflect, at least in part, differences in the quantity of the labor services provided by high- as compared to low-productivity workers. But to the extent that the observed association is robust to taking differences in work hours into account, it certainly merits further exploration.

One of the more innovative features of the authors' analysis is the use of variables designed to capture not just the average level but also the mix of different kinds of human capital to explain cross-establishment differences in productivity and cross-firm differences in market valuation. The pattern of the coefficients in column (C) of table 5.5, for example, implies that establishments with a larger fraction of workers with θ_i's above the 75th percentile have higher labor productivity, while those with a higher fraction of workers with θ_i's below the 25th percentile have lower labor productivity. But the positive coefficient on the interaction between these two variables suggests that a worker's contribution to establishment productivity may not be independent of who his or her coworkers are.
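To make the construction of these regressors concrete, the sketch below builds, for each establishment in a simulated sample, the share of its workers whose person effects lie above the economy-wide 75th percentile and below the 25th percentile, and then fits a productivity equation that includes both shares and their interaction, in the spirit of column (C) of table 5.5. Everything here, including the coefficient values, is invented for illustration; it is not the authors' specification.

```python
# Illustrative construction of the distributional human capital measures discussed above:
# establishment-level shares of workers with person effects (theta_i) above the 75th and
# below the 25th percentile, used with their interaction to explain log productivity.
# Simulated data; the "true" coefficients below are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
n_estab, workers_per_estab = 500, 50

# Person effects of each establishment's workers; establishments differ in average quality.
estab_mean = rng.normal(0, 0.15, (n_estab, 1))
theta = estab_mean + rng.normal(0, 0.30, (n_estab, workers_per_estab))
p25, p75 = np.percentile(theta, [25, 75])

share_high = (theta > p75).mean(axis=1)     # fraction of the workforce above the 75th percentile
share_low = (theta < p25).mean(axis=1)      # fraction below the 25th percentile

# Arbitrary data-generating process with a positive interaction term.
log_prod = (1.0 + 2.0 * share_high - 0.5 * share_low
            + 1.5 * share_high * share_low + rng.normal(0, 0.05, n_estab))

X = np.column_stack([np.ones(n_estab), share_high, share_low, share_high * share_low])
coef, *_ = np.linalg.lstsq(X, log_prod, rcond=None)
for name, c in zip(["constant", "share_high", "share_low", "interaction"], coef):
    print(f"{name:12s} {c:6.2f}")
```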
Future research may be able to make the lessons about how and why the mix of human capital matters more concrete. It might be possible, for example, to identify establishments within selected industries whose employment is highly skewed toward high-θ_i workers, establishments whose employment is skewed toward low-θ_i workers, and establishments that employ a mix. One then could look for differences in the way these establishments operate that could shed light on the sources of the differences in their productivity.

This paper clearly is not the end of this line of research. It is to be hoped that future work, both by these authors and by others, will further explore the validity of the assumptions on which the analysis is based, the robustness of the findings, and the real-world underpinnings of the patterns observed.

References

Blau, Francine D. 1977. Equal pay in the office. Lexington, MA: D.C. Heath and Company.
Groshen, Erica L. 1991. Sources of intra-industry wage dispersion: How much do employers matter? Quarterly Journal of Economics 106 (3): 869–84.
Groshen, Erica L., and David I. Levine. 1998. The rise and decline (?) of U.S. internal labor markets. Research Paper no. 9819. New York: Federal Reserve Bank of New York.
Kaplan, David S., and Brooks Pierce. 2000. Firm-wide versus establishment-specific pay practices. Paper presented at the Labor Studies Program meeting, National Bureau of Economic Research. 31 July–2 August, Cambridge, Massachusetts. Available at http://www.nber.org/~confer/2000/si2000/kaplan.pdf.
Lane, Julia, Laurie Salmon, and James R. Spletzer. 2001. Earnings determination: Contributions from the occupation and the establishment. Paper presented at the sixth annual meeting of the Society of Labor Economists. 20–21 April, Austin, Texas. Available at http://gsbwww.uchicago.edu/research/workshops/wae/lanesole.pdf.
Rones, Phillip L., Randy E. Ilg, and Jennifer M. Gardner. 1997. Trends in hours of work since the mid-1970s. Monthly Labor Review 120 (4): 3–14.
6  Measuring Organizational Capital in the New Economy
Sandra E. Black and Lisa M. Lynch
Sandra E. Black is assistant professor of economics at the University of California, Los Angeles (UCLA), and a faculty research fellow of the National Bureau of Economic Research. Lisa M. Lynch is the William L. Clayton Professor of International Economic Affairs at the Fletcher School of Law and Diplomacy, Tufts University, and a research associate of the National Bureau of Economic Research. This research was supported in part by the National Science Foundation Program on Innovation and Organizational Change. The research in this paper was conducted while the authors were Census Bureau Associates at the UCLA and Boston Research Data Centers. Research results and conclusions are those of the authors and do not necessarily indicate concurrence by the Bureau of the Census. This paper has been screened to ensure that no confidential data are revealed. The authors would like to thank Kathryn Shaw for her informative comments on an earlier draft of this chapter.

6.1 What Is Organizational Capital?

A growing body of literature over the past decade suggests that a firm's organizational structure or capital can contribute in significant ways to the productive capacity of a firm. But, as with other intangible assets, there is no consensus definition of what this organizational capital is, how to measure it, or how best to quantify its contribution to output (either current or future). Unlike physical capital, its value does not appear on the balance sheet of a firm, and when firms undertake substantial organizational change or reengineering this is typically treated as "consumption" rather than an increase in the assets of a firm. There is no "market" for organizational capital that we could use to generate a book value for it, and, unlike general human capital, it is not portable.

When considering the issue of measuring intangibles, a recent Brookings Task Force chaired by Margaret Blair and Steven Wallman (2001) suggested that there are three categories of intangible assets, each with more measurement problems than the last—the easiest category exists for assets
that can be owned and sold, the next category is for those assets that can be controlled by the firm but not separated out and sold, and finally, the category with the most difficult measurement issues contains assets that may not be wholly controlled by the firm. Intangible assets such as copyrights, brand, and trade names would be in the first category, the relatively easily addressed category of measurement problems, since they can be bought and sold. However, the degree of control the firm has over assets such as the design of production processes, human capital, relationship capital, and organizational capital varies along with the ability of the firm to "sell" these assets. As a result, they are much more difficult to measure in the usual accounting sense.

Before we can discuss how to improve the measurement of organizational capital, we must first establish a working, albeit crude, definition of organizational capital.1 To do this we have chosen to focus on some of the elements of organizational capital that have been shown, in both theoretical and empirical studies, to be associated with higher productivity for firms and/or higher wages for workers. This is not meant to be an exhaustive list of all the elements of organizational capital but rather a starting point that is representative and tractable. We divide organizational capital into three broad components—workforce training, employee voice, and work design (including the use of cross-functional production processes). While we will discuss these categories separately, it is important to note that there are important links and synergies between each of these categories that contribute to the overall value of organizational capital within a firm.

1. For a different approach to and definition of organization capital see Atkeson and Kehoe (2002). They conclude that nearly half of the output in manufacturing that is not accounted for by payments to labor and capital could be attributed to organization capital. They also conclude that the value of this organization capital is roughly two-thirds the value of physical capital. They model the acquisition of organization capital as coming from endogenous learning-by-doing so it is embodied in the firm and jointly produced with measured output. But their empirical treatment of organization capital is based on plant-specific productivity and age rather than any actual workplace practices.

Although training is usually thought of in the context of human capital, employer-provided training is an important component of workplace organization and organizational capital. We assume, for the sake of simplicity, that education decisions are primarily individual based and made independently from the employment relationship. But workplace training is a joint decision undertaken by the worker and the firm to invest in additional skills training after an employment relationship has begun. This workforce training, along with the education a worker brings to a job, raises the productive capacity of a firm. Training demands of a firm are not limited to the introduction of new technology; as new organizational structures such as teamwork are put in place, this increases the need of workers to acquire additional training to help them function in a more interactive group environment. At the same time, organizational capital may interact
with human capital, and the ability of a firm to undertake organizational change may be a function of the human capital of its workforce. Finally, spillover effects even for “specific training” may be much larger depending on the organizational structure of a firm in which these investments in human capital are made. For all of these reasons, then, we include workforce training under our umbrella of organizational capital. The second component of organizational capital is employee voice. By this we mean those organizational structures that give workers, especially nonmanagerial workers, input into the decision making associated with the design of the production process and greater autonomy and discretion in the structure of their work. Traditional forms of work organization are very task specific; each production worker has a specific task to complete, and once they learn how to accomplish the task, there is little independent thought involved. However, newer forms of organization involve giving employees, specifically lower-level production workers, more input into the production process and greater opportunities to improve efficiency. As employee voice increases, firms are better able to tap into the knowledge of nonmanagerial workers. There is a large continuum of practices associated with employee voice. These range from the employee suggestion box in the lunch room, to employees’ being consulted individually about their views, to individual jobenrichment schemes, to employees’ being consulted in groups, and finally to self-managed teams where production employees work in a semiautonomous setting. In addition, all of this can take place in the context of unionization, which may serve to help or hinder this communication process. As discussed by Malcomson (1983), agreements made between managers and workers may not be legally enforceable, so the presence of unions can address incentive compatibility problems that may arise at the workplace. In addition, negotiations that management undertakes with workers about the introduction of new workplace practices are less expensive if the company has to deal only with union specialists rather than each individual worker. Our third component of organizational capital is work design, including the use of cross-functional production processes that result in more flexible allocation and reallocation of labor in the firm. Examples of practices in this component include reengineering efforts that may involve changing the occupational structure of the workplace (including increasing the number of technical workers), the number of workers per supervisor, the number of levels of management within the firm, the existence and diffusion of job rotation, and job share arrangements. We also include in this component methods by which firms monitor their practices relative to others, such as benchmarking. Some of the changes we see in work design are associated with the introduction and diffusion of information technologies within the firm. For example, as new technologies reduce the cost of lateral communication, we see firms using these technologies to facilitate greater
communication between and across workers, both managerial and nonmanagerial. Monitoring technologies can also be used to reduce the number of supervisors required in the production process. So there are possible complementarities between this and other dimensions of organizational capital, as well as with investments in physical capital. While it is not a type of organizational capital per se, incentive-based compensation plays an important role in organizational capital. More generally, if we observe wage premiums being paid in employment situations where there is a higher level of organizational capital we might, in a very crude way, think of this premium as another way to “price” the value of the asset we are calling organizational capital. Employers will use wages and other forms of compensation to try to hold on to this relational asset. In addition to the basic wage, employers can also pay workers by piece rate, stock options, profit sharing, and bonuses related to achieving specific production targets. These are all tools that firms have used to, in part, realign workers’ interests toward those of shareholders. But when workers are asked to come forward with ideas that would improve the production process but may also put their own jobs at risk, they must be given an incentive to do this. Putting a portion of compensation “at risk,” especially for nonmanagerial employees, can have an important effect on the amount and type of relationship capital that exists within a firm and can have a large effect on workers’ discretionary effort. Recent work by Boning, Ichniowski, and Shaw (2001) has found strong evidence of complementarity between employee voice and incentive pay. So while incentive-based pay is not organizational capital per se, it is an important glue that holds the organizational capital together and keeps it within the firm. Therefore, in the remaining discussion on measurement issues we will also include this dimension of workplace practices. Our three components are not an exhaustive list of possible elements of organizational capital. For example, Kruse and Blasi (1998) identify employment security and recruitment and selection systems as important components of what they label high-performance work practices. But as with compensation, employment security and recruitment and selection systems are probably best thought of as “glue” rather than organizational capital. The next section summarizes some of the empirical work on the impact organizational capital has had on firms and workers. 6.2 The Importance of Organizational Capital in the New Economy: Empirical Evidence The importance of correctly measuring organizational capital stems from the evidence on its impact along a number of dimensions—labor productivity, wages, and labor demand. In addition, there is evidence of links
between organizational capital, human capital, and physical capital, especially information and communication technologies.

In terms of productivity, there is an extensive literature documenting the relationship between organizational capital and firm performance.2 Examples of intra-industry studies include work by Ichniowski, Shaw, and Prennushi (1997), Arthur (1994), Kelley (1994, 1996), Bailey (1993), and Dunlop and Weil (1996). By examining human resource practices associated with one specific production process it is possible to greatly reduce problems of underlying heterogeneity of production processes. Most of the intra-industry studies conclude that the adoption of a coherent system of new human resource management practices such as flexible job definitions, cross-training, and work teams, along with extensive reliance on incentive pay, results in substantially higher levels of productivity than more traditional human resource management practices.

2. See Ichniowski and Shaw (2003) for a thorough review of this literature.

Another research strategy is to examine a more representative cross-sectional sample of firms to determine the impact of workplace practices on broader measures of performance such as productivity or profitability. Examples of this strategy include Black and Lynch (2001, 2004), Bartel (1989), Bresnahan, Brynjolfsson, and Hitt (2002), Caroli and Van Reenen (2001), Ichniowski (1990), Huselid (1995), Huselid and Becker (1996), and Delaney and Huselid (1996). All of these studies have found a correlation between human resource management systems and business performance as measured by labor productivity, Tobin's q, or present-value gain in cash flow and firm market value. Many of these have also found evidence of the existence of synergies among workplace practices: the total impact is greater than the sum of the parts.3

3. The theoretical work of Milgrom and Roberts (1995) and Kandel and Lazear (1992), along with the empirical studies mentioned above, are important contributions in this area. Milgrom and Roberts argue that the impact of a system of human resource practices will be greater than the sum of its parts because of the synergistic effects of bundling practices together. Kandel and Lazear argue that introducing a profit-sharing plan for all workers in a firm may have little or no impact on productivity unless it is linked with other practices that address the inherent free-rider problem associated with corporatewide profit-sharing plans.

Given that this volume is interested in measurement issues from the perspective of the economy as a whole, is it possible to translate some of the micro-based evidence on the impact of organizational capital on labor productivity to the economy more generally? In particular, since a large fraction of the output growth in manufacturing in the 1990s was driven by increases in multifactor productivity, can any of this be potentially attributed to organizational capital? In Black and Lynch (2004), we use our estimates of the impact of workplace practices on labor productivity in manufacturing (done using establishment-level data) in a growth-accounting framework in order to see, roughly, how much of the overall growth in manufacturing during 1993–96 our measures of workplace innovation could account for.
Table 6.1   Compound average annual rates of growth in output and the contribution of factor inputs and multifactor productivity, manufacturing (% per year), 1993–96

                                                            Bureau of Labor     Black and
                                                            Statistics          Lynch (2004)
Output                                                           4.2                4.7
Combined inputs (includes capital, labor, and materials)        2.3                3.2
Multifactor productivity                                         1.9                1.6
Contribution of workplace practices                                                 1.4
Remaining residual                                                                  0.2

Sources: Bureau of Labor Statistics, Multifactor Productivity Trends, 1998, released September 21, 2000, and authors' own calculations from the EQW-NES first and second round cross sections with a 1 percent trim as presented in Black and Lynch (2004).
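As a quick arithmetic check on the shares reported in the discussion that follows, the contribution of workplace practices can be related to the rounded EQW figures in table 6.1; the 30 percent and 89 percent shares cited in the text are based on the unrounded estimates in Black and Lynch (2004), so the rounded figures reproduce them only approximately.

```python
# Back-of-the-envelope check of the growth-accounting shares discussed below,
# using the rounded EQW column of table 6.1 (the published shares use unrounded estimates).
output_growth = 4.7            # percent per year, EQW sample, 1993-96
mfp_growth = 1.6               # multifactor productivity growth
workplace_contribution = 1.4   # contribution of workplace practices

print(f"share of output growth: {workplace_contribution / output_growth:.0%}")  # about 30 percent
print(f"share of MFP growth:    {workplace_contribution / mfp_growth:.0%}")     # about 88 percent, vs. 89 reported
```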
We present these calculations again in table 6.1 along with the figures reported by the Bureau of Labor Statistics (BLS) over the same time period to "benchmark" our findings with their numbers. As seen in this table, the sample of manufacturing establishments that we used in our empirical work (the Educational Quality of the Workforce survey [EQW]) experienced very similar output growth over the period 1993–96 as reported by the BLS for the country as a whole. The BLS reports that output in manufacturing grew at a compound average annual rate of 4.2 percent between 1993 and 1996. We find a rate of 4.7 percent using the EQW data over the same period. The BLS reports that combined inputs (capital, labor, and materials) grew 2.3 percent over this period, and using the estimated coefficients from our empirical work (Black and Lynch 2004) as shares we find that inputs grew at 3.2 percent for the EQW sample. As a result, multifactor productivity grew 1.9 percent in the BLS reported figures and 1.6 percent using EQW data.

As a rough approximation of the contribution of organizational capital, we use the coefficients on the workplace practices that come from our microlevel production function estimation to calculate the impact of workplace innovation on multifactor productivity. We find that they contributed 1.4 percentage points per year. In other words, changes in organizational capital may have accounted for approximately 30 percent of output growth in manufacturing over the period 1993–96, or 89 percent of multifactor productivity. An important caveat is that many of the components of workplace organization, such as reengineering, reflect both technological as well as organizational changes. Though the number seems high, we believe that this accounting exercise indicates that measuring organizational capital can go some way in explaining recent trends in multifactor productivity.

Given that it improves productivity, it is not surprising to find that organizational capital also has an impact on workers, both in terms of their
wages and in terms of skill demand. Investments in organizational capital seem likely to benefit the workforce because workers are unlikely to contribute in the manner these practices require unless they are assured a share of the gains (Osterman 2000). Organizational changes may also require a higher level of human capital from individual workers since they need to deal effectively with increased uncertainty and responsibility (see Osterman 1994 and Lynch and Black 1998). In terms of the empirical work on the impact of organizational capital on wages, the evidence is mixed. Using data from a representative sample of employers, Osterman (2000) finds no impact of work organization practices on wages of either core workers or all workers over the period 1996–97, and Cappelli and Carter (2000) find no impact on wages of nonmanufacturing workers. In contrast, both Black, Lynch, and Krivelyova (2004) and Cappelli and Neumark (2001) find that wages of manufacturing workers increase when employers extend their usage of organizational practices such as team work. Black, Lynch, and Krivelyova also find that the impact of organizational capital has the largest effect on the wages of supervisors, production, and sales or clerical workers in the manufacturing sector. The March 2001 Industrial and Labor Relations Review is a special issue devoted to the impact of technology and work organization on wage inequality at the industry level. While not all studies in this volume were able to have access to data on organizational capital, some studies in this volume, such as Hunter and others (2001), Batt (2001), and Bailey, Berg, and Sandy (2001) find a significant relationship between workplace organizational practices and earnings. While there has been less done on the impact of organizational capital on labor demand, several recent studies have looked at this question. Theoretically, as discussed by Kremer and Maskin (1996) and Acemoglu (2000) and reviewed in the context of organizational change and wage inequality by Aghion, Caroli, and Garcia-Peñolosa (1999), when technological and organizational change takes place, skill homogeneity can increase within firms. Therefore, if technical change and organizational change are complementary activities, as argued in Bresnahan, Brynjolfsson, and Hitt (2002), and since technical progress tends to be skill biased, it may be the case that the organizational change would also be skill biased. Osterman (2000) finds that measures of organizational capital are associated with a higher probability of layoffs, even within firms that have been experiencing net gains in total employment. This is suggestive of the reallocation process hypothesized in Kremer and Maskin and Acemoglu. Caroli and Van Reenen (2001) find evidence for British and French firms that a variety of measures, consistent with our definition of organizational capital, are associated with reduced demand for unskilled workers in both countries. While the focus of this paper is not on measuring information technology, several researchers have found strong positive correlations between
firms’ decisions to adopt new technologies, especially information technology (IT), and additional investment in organizational capital. Bresnahan, Brynjolfsson, and Hitt (2002) find evidence of strong complementarity between several indicators of IT use, workplace organization, and the demand for skilled labor. The positive correlation between computers and workplace practices is also shown in Lynch and Black (1998) and Caroli and Van Reenen (2001). Finally, it is important to note that, while we have divided our measure of organizational capital into three components, there is an extensive literature, both theoretical and empirical, that suggests the existence of synergies in practices. For example, Athey and Stern (1998) discuss how the existence of complementarity in workplace practices implies that the adoption of one practice has externalities for adoption decisions about other practices. In addition, if practices are adopted in clusters, then some combinations of practices may occur only infrequently, making it difficult, empirically, to precisely estimate the impact of these practices on outcomes such as productivity. 6.3 How Has Organizational Capital Been Measured? Data on organizational capital and related components such as compensation, recruitment and selection methods, and employment security provisions have been collected at three levels of aggregation. The first level of aggregation is case studies of individual companies. One of the most famous examples is the 1983 Harvard Business School case study of the Lincoln Electric Company by Norman Berg and Norman Fast; another, more recent example is Ann Bartel’s (2004) study of a Canadian bank. The next level of aggregation includes detailed intra-industry studies such as the work by Ichniowski, Shaw, and Prennushi (1997) on the integrated steel industry. One of the many benefits of intra-industry data is the ability to examine organizational practices without the confounding effect of different production processes and organizational structure that are due to the production of different goods. But the focus of this paper is to examine how more nationally representative surveys of businesses have attempted to capture these measures of organizational capital. This will help identify whether or not there are some measures of organizational capital that might be relatively easy to add to nationally representative ongoing surveys of businesses. This could be especially useful from a national incomes account perspective. For this purpose we focus on micro employer surveys from the 1990s that measure the components of organizational capital we believe have been shown to be the most important in analyses of productivity, wages, and labor demand. One of the earliest surveys of workplace practices to use these measures to study their impact on labor productivity and Tobin’s q was Ichniowski (1990). He used data on personnel policies and practices taken from a 1986
Table 6.2   Survey questions on training
Survey
Training questions asked
Ichniowski (1990)
Does your business have a formal employee training and development program?
Huselid (1995), Huselid and Becker (1996)
How many hours of training per year are typically received by an experienced employee (someone employed more than one year)?
Osterman (1994, 2000)
Proportion of core employees who received different types of training (e.g., off the job or cross training)
Gittleman, Horrigan, and Joyce (1998)
Variety of training incidence measures including types of training offered (basic, workplace-related, and job skills) along with reason for training (technology, skill specificity, seniority, retention)
Black and Lynch (1996, 2001, 2004), Lynch and Black (1998)
Variety of training questions on incidence of formal and informal training programs; types of training offered, including computer skills training, team work training, sales training, and new methods training; proportion of workers trained by five occupational categories; the costs of training as a share of total labor costs; does training occur off the job.
Caroli and Van Reenen (2001)
In the past three years have you trained workers aiming at specialization or trained workers aiming at multiskilling? (French data)
survey by Columbia University’s Industrial Relations Research Center covering 495 Compustat II business lines. This was a mailed survey of the Compustat sample with a 10 percent response rate. Tables 6.2–6.4 summarize some of the questions used from this survey divided into the three components of organizational capital we are looking at—training, employee voice, and work design. The training measure is a simple incidence of formal training. The employee voice measure is primarily the right of workers to voice complaints under some form of due process rather than any type of direct employee participation in decision making within the firm. Finally, the measure of work design collapses a variety of different work structures into one variable. In the empirical work done with these measures, they are not introduced on their own but rather are collapsed into nine clusters of practices. Mark Huselid conducted two mailed surveys of U.S. firms in 1992 and 1994. Surveys were mailed to 3,477 firms in 1992 and 3,847 firms in 1994, and employers were asked about their organizational practices in the previous year. The sample was drawn from the 12,000 publicly held firms listed in Compact Disclosure, a commercially available database containing annual 10-K reports. The overall response rate was 28 percent in 1992 and 20 percent in 1994. In addition, there was a subsample of firms that responded
Table 6.3   Survey questions on employee voice
Survey
Employee voice questions asked a
Ichniowski (1990)
Do you have any formal information sharing program with these groups of employees? Do you conduct attitude surveys among any of these groups of workers? Is there a formal grievance procedure or formal complaint resolution system?
Huselid (1995), Huselid and Becker (1996)
What proportion of the workforce are included in a formal information-sharing program (e.g., a newsletter)? What proportion of the workforce is regularly administered attitude surveys? What proportion of the workforce participate in Quality of Work Life (QWL), Quality Circles (QC), and/or labor management participation programs?
Osterman (1994, 2000)
Proportion of the establishment’s core employees involved in problem solving groups Proportion of the establishment’s core employees involved in teams Proportion of the establishment’s core employees involved in Total Quality Management (TQM)
Gittleman, Horrigan, and Joyce (1998)
Are there worker teams in the establishment? Is there TQM? Are there Quality Circles? Is there employee involvement in the establishment’s technology and equipment purchase decisions?
Black and Lynch (1996, 2001, 2004)
Is there TQM? Proportion of non-managerial workers meeting regularly to discuss workplace issues Proportion of workers in self-managed teams
Caroli and Van Reenen (2001)
For any of the organizational methods I will mention (including delayering, Just-in-Time, Quality Circles, and TQM) would you tell me if it is already implemented, in the process of being so, being considered, or not even thought of, in your establishment?
a. All questions were asked separately for a business's union and nonunion production workers.
to both surveys, allowing for longitudinal analysis of organizational structure and performance outcomes of the firm. The survey included information on a wide range of organizational practices (shown in tables 6.2–6.4). The responses to these questions were then factor analyzed to see their impact on Tobin’s q and the gross rate of return on assets. While Ichniowski and Huselid both focused their surveys on large business lines or entire firms, Paul Osterman employed a slightly different strategy and focused on individual establishments. He conducted two telephone surveys (rather than mail) in 1992 and 1997 to compile a nationally representative sample of U.S. establishments. By directing the questions to es-
Table 6.4   Survey questions on work design
Survey
Work design questions asked
Ichniowski (1990)
Does your organization use a formal job design program to integrate work content and the qualifications required of employees to perform work? If yes what type of job design do you use? (a) work simplification, (b) job rotation, (c) job enlargement, (d) job enrichment, or (e) other
Huselid (1995), Huselid and Becker (1996)
What proportion of the workforce hold jobs that have been included in a formal job analysis?
Osterman (1994, 2000)
Proportion of the core employees involved in job rotation
Gittleman, Horrigan, and Joyce (1998)
Is there job rotation?
Black and Lynch (1996, 2001, 2004)
Variety of measures including benchmarking, reengineering, number of managerial levels, percentage of workers in job rotation, job sharing
Caroli and Van Reenen (2001)
Have you in the last three years made any substantial changes in work organization or working practices not involving new plant machinery or equipment that directly affected the jobs or working practices of the manual workforce—and then asked for nonmanual workers? (for British est.) For any of the organizational methods I will mention, would you tell me whether it is already implemented, in the process of being so, being considered, or not even thought of in your establishment (included in this list is delayering, quality circles, and TQM)? (French data)
tablishments rather than corporate headquarters, he hoped to obtain more accurate information about implementation of workplace practices. The response rates to the surveys were 65 percent for 1992 (806 establishments) and 58 percent for 1997 (683 establishments). The 1997 survey also contained a subsample of 462 establishments that were also interviewed in 1992, thereby allowing for longitudinal analysis. The questions on workplace practices were restricted, however, to “core” employees within the establishment and covered the proportion of these type of workers in training, self-managed work teams, job rotation, quality circles or off-line problem solving groups, and Total Quality Management (see tables 6.2– 6.4 for more information on the types of questions asked). The BLS Survey of Employer-Provided Training (SEPT) was conducted in 1993. While the primary purpose of this survey was to collect detailed information on employer training provided or financed by private nonagricultural establishments, information on other organizational practices was also collected. Nearly 12,000 establishments were surveyed, with a response rate of 71.3 percent (7,895 establishments). In addition to answering extensive questions on types of training and training practices, 5,987 establishments provided information on the organization of work. The tar-
get respondent was first the training department, next the human resource department, and finally the office manager. Establishments were asked which, if any, of the following six work organization practices were in existence in 1993: worker teams, total quality management, quality circles, peer review of employee performance, employee involvement in the firm’s technology and in equipment purchase decisions, and job rotation. (See Gittleman, Horrigan, and Joyce 1998 for more details as well as tables 6.2–6.4.) The data set we have used for our own work on labor productivity, training, and wages (e.g., Black and Lynch 1996, 2001, 2004; Black, Lynch, and Krivelyova 2004; and Lynch and Black 1998) comes from the EQW National Employers Survey conducted in 1994 and 1997. The EQW National Employers Survey (NES) was first administered by the U.S. Bureau of the Census as a telephone survey in August and September 1994 to a nationally representative sample of more than 3,000 private establishments with more than 20 employees. The survey represents a unique source of information on how employers recruit workers, organize work, invest in physical capital, and utilize education and training investments. The survey oversampled establishments in the manufacturing sector and establishments with more than 100 employees. Public-sector employees, not-forprofit institutions, and corporate headquarters were excluded from the sample. The target respondent in the manufacturing sector was the plant manager and in the nonmanufacturing sector was the local business site manager. However, the survey was designed to allow for multiple respondents, so that information could be obtained from establishments that kept financial information such as the book value of capital or the cost of goods and materials used in production at a separate finance office (typically at corporate headquarters for multi-establishment enterprises). Computerassisted telephone interviewing (CATI) was used to administer each survey, which took approximately twenty-eight minutes to complete. The sampling frame for the survey was the Bureau of the Census Standard Statistical Establishment List (SSEL) file, one of the most comprehensive and up-to-date listings of establishments in the United States. By design, the survey allowed for a subset of establishments to be matched to the Census Longitudinal Research Database (LRD) that includes longitudinal information for manufacturing establishments only. The LRD, housed at the Center for Economic Studies at the Bureau of the Census, was created by longitudinally linking the establishment-level data from the Bureau of the Census’s Annual Survey of Manufacturers (ASM). The LRD data include information on shipments, materials, inventories, employment, expenditures on equipment and structures, book values of equipment and structures, and energy use (for more information on the LRD see Haltiwanger and Davis 1991). Because we are able to match the LRD with the EQW NES, we have annual establishment-level data on inputs and outputs of production for the manufacturing employers in our
survey. It is important to note that the LRD is basically the universe of all manufacturing establishments with more than 250 employees but is only a subsample of establishments with less than 250 employees. The response rate for manufacturing establishments in the 1994 EQW NES was 66 percent (1,621 establishments) and 60.6 percent (1,324 establishments) for nonmanufacturing establishments. These response rates are substantially higher than most other voluntary establishment surveys. Probit analysis (available from the authors upon request) of the characteristics of nonrespondents indicates that there was no significant pattern at the two-digit industry level in the likelihood of participating in the survey. The only businesses more likely not to participate were manufacturing establishments with more than 1,000 employees. The telephone survey was repeated again in 1997 and the overall response rate was 59 percent (4,139 establishments). There was also a subsample of 766 establishments that had been interviewed in 1994. The response rate of establishments in the longitudinal sample was 74 percent. In both surveys, a wide range of questions were asked about training and workplace practices. The training questions included not only incidence measures of whether the establishment undertook any formal training of its workers but also types of training (e.g., computer literacy, team work training, literacy training), the proportion of workers trained by five occupational categories, and the percentage of total labor costs that training expenditures represented. In terms of employee voice, the survey included questions about the proportion of nonmanagerial workers meeting regularly to discuss workplace issues, total quality management, the proportion of workers in self-managed teams, and union status. For work design, the surveys included questions about the usage of benchmarking, the percentage of workers in job rotation, the number of managerial layers, and whether the establishment had undergone any reengineering efforts. The last surveys we examine are those British and French data sets used by Caroli and Van Reenen (2001) for their work on workplace organization and skill demand. The British data are from the British Workplace Industrial Relations Survey (WIRS), an establishment-level data set that consists of a cross section of over 2,000 British establishments in 1984 and 1990. In 1984 there are a number of questions that relate to organization change, and in 1990 there is a limited follow-up that asks more basic information. In both surveys, senior managers are asked whether there has been an introduction of new plant, machinery, or equipment that includes new microelectronic technology; the introduction of new plant, machinery, or equipment not including new micro-electronic technologies; and whether there have been substantial changes in work organization or working practices not involving new plant, machinery, or equipment. A positive response to the last question would indicate changes in organizational capital. Caroli and Van Reenen attempted to verify that the change in organizational
capital indicated in the survey actually corresponded to changes in relevant workplace practices by examining the 1984 data, in which managers were asked in more detail what the change actually involved. In general, this change in work organization was associated with increases in responsibility and is more likely to be associated with a widening range of more interesting tasks performed by workers. The corresponding French data set is the REPONSE (Relations Professionnelles et Negociations d’Entreprise), which was constructed with reference to the British WIRS. They surveyed 2,500 establishments, asking senior managers in 1992 about industrial relations and organization in 1989–92. This survey includes more detailed information on workplace organization, including explicit questions on delayering (removing one or more managerial levels), quality circles, and total quality management. (See Caroli and Van Reenen 2001 for more details on both the British and the French surveys.) It is clear that there is no one way that has been used to conduct these surveys. All of these surveys target different respondents, use a range of sampling frames, target different levels of the firm (from the establishment to business lines to the firm as a whole) for measures of organizational practices, utilize different methods to conduct the survey (mail or telephone), and have a range of response rates. The next section of the paper summarizes some of measurement problems that arise from this range of strategies to collect information on organizational capital. 6.4 Measurement Issues There are a number of issues regarding data collection of organizational capital; however, many of these issues are not unique to organizational capital. One of the more basic questions is what is the appropriate business unit to study when considering organizational structure. Does each firm have a particular organizational structure, or is it even deeper than that, and does organizational structure vary within the firm across establishments? Interestingly, there has been little focus on this question, with a variety of studies using different units of observation. Bartel (1989) uses a data set in which the business line is the unit of observation, Huselid (1995) and Huselid and Becker (1996) use firm-level data, while Black and Lynch (1996, 2001, 2004) use establishment-level data. Once the appropriate unit of observation is determined, the next question becomes who should be surveyed. Responses obtained from workers within a firm might be different from those obtained from managers, which in turn may be different from responses from human resource workers. In the training literature, there is evidence that firms report that significantly more training is given by firms than is actually received by workers when workers are queried (see, e.g., Barron, Berger, and Black 1997, 1999). Gen-
erally, the literature to date has focused on employer surveys under the assumption that employers have a better knowledge of the workplace structure and training that is being implemented. In addition, recent data collection efforts have focused on the human resource officer or training manager as the target respondent for questions pertaining to organizational structure. For smaller firms, this individual may be sufficiently knowledgeable to also answer questions on the financial aspect of the firm. However, with larger firms and perhaps more detailed surveys, as was the case in the EQW NES, there may be multiple target respondents, with the financial officer answering questions on investment and output and the human resource officer answering questions on organizational structure. Other issues that make organizational capital hard to measure deal with its intangible nature. The ability to calculate the depreciation rate of intangible goods has long eluded economists. High turnover could cause organizational capital to depreciate very quickly, whereas strong firm attachment could slow depreciation. When considering the best way to measure organizational capital, evidence suggests that incidence of activities is not sufficient. It is crucial to know not only whether practices were implemented but also how these practices are diffused both across employees or occupations and among employees within occupations. In addition, it is vital to know how long the practices have been in effect and what the start-up costs were: both direct, in terms of dollars spent by the firm, and indirect, in terms of worker time are found to affect wages, productivity, and skill demand. Finally, it is important to know how are these practices being used together. While the previous section of this paper summarized overall response rates to surveys that asked questions about organizational practices, the overall response rates do not reveal variations in response rates to specific types of questions included in these surveys. Sometimes employers are confused by jargon that is unfamiliar to them or are concerned that answering a question may reveal proprietary information. Or employers may simply have a difficult time coming up with an accurate answer in the context of a short telephone or mailed survey. Therefore, it is important to understand which types of questions employers have an easier time responding to. In order to understand the efficacy of recent efforts to characterize workplace organization, it is useful to “benchmark” our relative success of getting responses to measures of organizational capital with more traditional questions of employer practices. Table 6.5 presents response rates for what we generally consider to be standard production function variables, broken down by manufacturing and nonmanufacturing and then further broken down by firm size, using our 1994 EQW NES survey. As one can see, response rates vary significantly across variables and firm size. Interestingly, it is the larger establishments that have lower response rates; large nonmanufacturing firms have the lowest response rates across
Table 6.5 Response rates for standard production function variables, 1994 EQW-NES survey
(Response rates, by manufacturing and nonmanufacturing and by firm size: 1–49, 50–99, 100–249, and 250 or more employees, for capital, total sales, materials, and the percentages of managers, supervisors, production/frontline workers, technical workers, and clerical/sales workers.)
the board. Though this may seem surprising, it may be the case that larger employers are more restricted in terms of the information they are able to release without approval from corporate headquarters. This limitation highlights the importance of allowing for multiple respondents in the survey. In terms of specific variables, information on materials used generates the highest nonresponse rate, with only 59 percent of manufacturing firms and 45 percent of nonmanufacturing firms responding. Second is capital, with a slightly higher (64 percent manufacturing, 54 percent nonmanufacturing) response rate. Establishments seem most able or willing to respond to questions about employment and the breakdown of the labor force, with response rates in each occupation category of about 87 percent. These response rates provide a benchmark with which one can consider response rates for survey questions on organizational capital. Table 6.6 provides evidence on response rates for questions regarding training. The top panel shows response rates for types of training, measuring incidence alone. It is clear that these questions are relatively easy for firms to answer, and these factors have been shown to be important factors in production function estimation (Black and Lynch 1996). The bottom panel of table 6.6 shows that, when one moves beyond the incidence of training, response rates begin to fall. An important factor to measure is the cost of doing training; however, response rates in column (1) suggest that firms, particularly large firms, are not willing or able to answer these questions. In contrast, it is the larger firms in manufacturing industries who seem best able to answer the number of workers trained by occupation; the smallest firms have very low response rates. Among nonmanufacturing firms, the opposite is true, and response rates drop for the largest firms. It is clear that, when training is measured along these dimensions, it is more difficult to get information from the establishments. However, it is particularly important to do so in order to build up some measure of the stock of additional human capital being added to the firm (similar to new capital investment). And though these intensity measures are more difficult to obtain than simple incidence measures, response rates to these questions are still higher than those for standard capital and materials measures. Table 6.7 focuses on another aspect of organizational capital: employee voice. Commonly used measures include the existence of a total quality management (TQM) system, the percentage of production or frontline workers meeting in groups, and whether or not an establishment is unionized. As table 6.7 shows, response rates to all of these questions are relatively high, averaging a bit more than 90 percent. These variables have repeatedly been shown to have important impacts on wages as well as productivity. Establishments also appear to be able to answer questions about the organization of the workplace. As table 6.8 shows, response rates for questions on the organization of the workplace are high, both in manufacturing and nonmanufacturing, large and small establishments. Again, the
Table 6.6 Response rates for training, 1994 EQW-NES survey
(Response rates for manufacturing and nonmanufacturing establishments for the existence of formal training, computer literacy training, new methods training, teamwork training, and cross-training; and, by firm size, for the cost of training as a share of labor costs, the number of managerial, technical, production/front line, and clerical/sales workers trained, the percent of training done during work hours, and the percent of training done in house.)
Table 6.7 Response rates for employee voice, 1994 EQW-NES survey

                          Total Quality   Percentage of production    Unionized
                          Management      workers meeting in groups
Manufacturing total            93                    90                  93
  1–49                         90                    89                  90
  50–99                        95                    94                  94
  100–249                      93                    92                  94
  250+                         93                    89                  93
Nonmanufacturing total         92                    87                  92
  1–49                         92                    91                  93
  50–99                        93                    92                  92
  100–249                      93                    90                  92
  250+                         91                    81                  90
high response rates are particularly important given that these variables have also been shown to have an impact on productivity and wages in empirical work. On the compensation side, there is a bit more variation in response rates (see table 6.9). Profit sharing appears to be relatively easy for firms to answer, with response rates hovering around 90 percent in all occupations except technicians, which have slightly lower response rates (particularly in nonmanufacturing and small firms). However, response rates drop significantly when one looks at wages. Again, technicians have the lowest response rates among the occupations. Although response rates seem much lower, it is important to note that they are still significantly higher than those for more commonly collected variables such as capital and materials. As with the more commonly collected variables from table 6.5, response rates fall for larger firms. Responses to questions, though necessary, are not sufficient. If all firms respond to a question but there is no variation across firms, it is difficult to impossible to identify the impact of that practice on establishment or worker outcomes. Given that we know which questions firms are best willing or able to answer, it is next important to examine how much variation there is in these practices, both across firms and over time. For example, if all firms indicate that they have some informal training in place (incidence), we would not be able to disentangle the effect of informal training on outcome measures. Tables 6.10 through 6.13 present weighted means for a variety of workplace practices across manufacturing and nonmanufacturing firms by firm size. Table 6.10 shows the incidence of different types of training and then the cost of training, the percentage of training done during work hours, and the percentage of training done in-house. Among types of training,
Table 6.8 Response rates for workplace organization measures, 1994 EQW-NES survey
(Response rates, by manufacturing and nonmanufacturing and by firm size, for the number of managerial levels, the percentage of nonmanagerial workers involved in job rotation, the percentage of nonmanagerial workers in self-managed teams, participation in benchmarking, flextime, and job-sharing.)
Table 6.9 Response rates for compensation measures, 1994 EQW-NES survey
(Response rates, by manufacturing and nonmanufacturing and by firm size, for profit sharing and for wages, each reported separately for managerial workers, supervisors, technical workers, clerical/sales workers, and production/front line workers.)
Table 6.10 Summary statistics for training measures, 1994 EQW-NES survey (weighted means)
(Weighted means, by manufacturing and nonmanufacturing and by firm size, for the existence of formal training, computer literacy training, new methods training, cross-training, teamwork training, the cost of training as a share of labor costs, the percentage of training done during work hours, and the percentage of training done in house.)
Table 6.11 Summary statistics for employee voice measures, 1994 EQW-NES survey (weighted means)

                          Total Quality   Percentage of production    Unionized
                          Management      workers meeting in groups
Manufacturing total            42                    40                  21
  1–49                         28                    38                  12
  50–99                        41                    42                  20
  100–249                      52                    37                  31
  250+                         66                    50                  35
Nonmanufacturing total         36                    56                  11
  1–49                         33                    62                   8
  50–99                        36                    55                   8
  100–249                      38                    44                  18
  250+                         52                    41                  24
smaller firms tend to do less training and there is considerable variation across firm size (see Lynch and Black 1998 for a review). There is also substantial variation across types of training, with fewer firms, on average, providing computer literacy training and more firms providing new methods training. There is also substantial variation across the costs of training. Consistent with the idea that large firms provide more training, training is a higher share of total labor costs for large firms, and there is substantial variation both across firm size and between manufacturing and nonmanufacturing firms. Finally, there is substantial variation in the timing and location of training, with small firms more likely to outsource and provide training outside of work hours. These variables have also been shown to have an association with establishment productivity (see Black and Lynch 1996). Table 6.11 shows that there is even more variation, relatively speaking, when one looks at employee voice. Among manufacturing establishments, small establishments show much less employee voice: they are less likely to have a system of TQM, a lower percentage of production workers meet in groups, on average, and they are less likely to be unionized. These numbers gradually increase as establishment size grows. In nonmanufacturing, the situation is somewhat different. While many establishments have implemented a TQM system, the smaller establishments are the ones that are most likely to have a larger share of nonmanagerial workers meeting in groups. Although unionization numbers are low across the board in nonmanufacturing, it is not surprising that larger firms are much more likely to be unionized. Among different measures of the organization of the workplace in table 6.12, there is far less variation in incidence and diffusion by establishment
Table 6.12 Summary statistics for workplace organization measures, 1994 EQW-NES survey (weighted means)

                         No. of      % nonmanagerial  % nonmanagerial   Participate in             Allow
                         managerial  workers in       workers in self-  benchmarking    Flextime   job-sharing
                         levels      job rotation     managed teams
Manufacturing total         2.0           17               12                24            25          23
  1–49                      1.6           20               12                14            29          30
  50–99                     2.0           14               12                19            20          20
  100–249                   2.3           16               11                35            22          18
  250+                      2.7           18               15                43            28          18
Nonmanufacturing total      2.1           18               13                24            42          36
  1–49                      1.6           21               14                20            39          41
  50–99                     2.5           15               11                27            47          32
  100–249                   2.7           16               13                23            38          27
  250+                      3.6            8                9                49            49          32
Table 6.13 Summary statistics for compensation measures, 1994 EQW-NES survey (weighted means)

                          Profit sharing for:
                          Managerial   Technical   Clerical/Sales   Production/Front line
                          workers      workers     workers          workers
Manufacturing total            66           49            55                 49
  1–49                         61           44            51                 52
  50–99                        65           53            56                 51
  100–249                      71           48            55                 39
  250+                         75           58            52                 52
Nonmanufacturing total         72           42            46                 46
  1–49                         72           43            47                 48
  50–99                        69           36            41                 39
  100–249                      73           41            49                 53
  250+                         77           53            51                 41
size. For manufacturing establishments, there is little variation in the percentage of nonmanagerial workers involved in job rotation, with the number centered around 17 percent. The number of managerial levels does increase as the establishment gets bigger, both in manufacturing and nonmanufacturing. Larger establishments are more likely to participate in benchmarking, while smaller establishments are slightly more likely to allow job sharing. There is little relationship between establishment size and the availability of flextime (though nonmanufacturing establishments are more likely to have it as a whole) or the percentage of nonmanagerial workers in self-managed teams. Finally, as shown in table 6.13, among measures of compensation, the story is the same for both manufacturing and nonmanufacturing establishments. There is little relationship between establishment size and profit sharing for clerical or sales workers or production or front line workers. However, larger establishments are more likely to offer profit sharing for managerial and technical workers. In addition to differences across types of establishments, there are also differences over time. If one tries to identify the impact of organizational capital by focusing on changes within establishments over time (in order to eliminate unobserved establishment fixed characteristics), it is essential that there be significant variation over time. This suggests that one should consider not only the questions asked but the frequency with which establishments are surveyed. Table 6.14 examines the variation over time in a number of workplace practices when we consider changes between 1994 and 1997 in the matched panel of establishments in the EQW NES. In this
Table 6.14 Changes in organizational capital, 1994–97 (unweighted EQW-NES data)
(For manufacturing and nonmanufacturing establishments, the percent that increased, kept unchanged, or decreased each of the following between 1994 and 1997: unionization, the percentage of production workers meeting in groups, participation in benchmarking, the percentage of nonmanagerial workers involved in job rotation, the number of managerial levels, the percentage of nonmanagerial workers in self-managed teams, and the percentages of nonmanagerial and managerial workers using computers.)
time period, there are a number of similarities across manufacturing and nonmanufacturing establishments. Most establishments did not change their unionization status, with only approximately 15 percent becoming deunionized during this time period and 1 percent unionizing. A significant number of establishments increased the percentage of production workers meeting in groups, while more establishments stopped the process of benchmarking than began it. The percentage of nonmanagerial workers involved in job rotation increased, an equal percentage of establishments increased the number of managerial levels as decreased it, and there was an unambiguous increase in the diffusion of technology, with more establishments increasing the percentage of managerial and nonmanagerial workers using computers than decreasing it. It is important to note that, when considering changes over time, one is particularly susceptible to issues of measurement error, particularly when focusing on changes over a relatively short period of time. Surveying establishments too frequently exacerbates this problem.

Finally, it is also important to consider the optimal breakdown of worker categories. While many data sets currently distinguish between production and nonproduction workers (for example, the longitudinal research database [LRD]), recent surveys have begun to explore the advantages of finer worker categories. In the EQW NES, workers are divided into supervisors, managers, technical workers, production workers, and clerical workers. While the distinctions may be less important for production function estimation, there may be significant benefits to the finer level of detail when considering the impact of organizational capital on wages and labor demand.

6.5 Final Recommendations

This paper has tried to identify three main elements of organizational capital that have been shown to have a significant impact on productivity, wages, and labor demand. These three elements include training, employee voice, and work design. While an increasing number of researchers have attempted to measure these and other dimensions of organizational capital for the purpose of documenting its impact on the so-called "new economy," these efforts have been uncoordinated and sporadic. Part of the reason there has been no systematic attempt to measure workplace practices over time has been a lack of consensus on what to measure, along with concerns over the costs of measurement.

Our experience with training measures suggests that, while measuring the capital stock is tough, measuring the stock of training is even tougher. Evidence suggests that it may be too hard for firms to calculate a stock measure, and our best shot is to try to get at new investment in training. But we are still left with questions such as these: what is the appropriate
margin—any training, type of training, duration of training? How do we think about depreciation?

The good news is that other measures of organizational practices, such as the percentage of workers meeting on a regular basis, unionization, and various work design elements, are much easier to obtain. Respondents have a clear idea of what is being asked, these types of questions are not time consuming, there is currently substantial variation in the data, and, importantly, they are associated with outcomes we care about.

Our final question deals with the frequency of measurement: how often should you measure these elements of organizational capital? Our work suggests that this does not need to be done on a monthly basis or even a quarterly basis. For training and compensation information to be used for labor productivity analysis, an annual survey is probably sufficient. For the other components of organizational capital such as employee voice and work design, checking in every other year is likely more than enough, as these practices do not change with high frequency.

To date, the literature has been clear: workplace organization matters. While these findings have been consistent, the measures of organizational capital, the unit of measurement, and the frequency of measurement have not been. Hopefully, careful consideration of the issues and problems associated with the measurement of organizational capital will help unify data collection efforts in the future, leaving us with more systematic information on firms, including measures not only of physical capital and human capital but also of organizational capital.
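The tabulations behind tables 6.5 through 6.13 are straightforward to reproduce once establishment-level responses are in hand. The following is a minimal sketch, not the authors' code: the records, field names, and weights are hypothetical stand-ins for the EQW-NES file. It shows the two basic calculations used throughout the tables, an item response rate (the share of establishments answering a question) and a weighted mean of the responses, each computed within sector-by-size cells.

```python
# Minimal sketch of the response-rate and weighted-mean tabulations discussed in the text.
# All records, field names, and weights below are hypothetical, not EQW-NES data.

establishments = [
    # sector, size class, sampling weight, share of workers trained (None = item nonresponse)
    {"sector": "manufacturing",    "size": "1-49", "weight": 120.0, "trained": 0.20},
    {"sector": "manufacturing",    "size": "250+", "weight": 15.0,  "trained": None},
    {"sector": "manufacturing",    "size": "250+", "weight": 18.0,  "trained": 0.55},
    {"sector": "nonmanufacturing", "size": "1-49", "weight": 200.0, "trained": 0.10},
    {"sector": "nonmanufacturing", "size": "1-49", "weight": 180.0, "trained": None},
]

def cell_statistics(records, item):
    """Item response rate and weighted mean of `item` within sector-by-size cells."""
    cells = {}
    for r in records:
        key = (r["sector"], r["size"])
        c = cells.setdefault(key, {"n": 0, "answered": 0, "wsum": 0.0, "wval": 0.0})
        c["n"] += 1
        if r[item] is not None:          # only answered items enter the weighted mean
            c["answered"] += 1
            c["wsum"] += r["weight"]
            c["wval"] += r["weight"] * r[item]
    return {
        key: {
            "response_rate": c["answered"] / c["n"],
            "weighted_mean": (c["wval"] / c["wsum"]) if c["wsum"] > 0 else None,
        }
        for key, c in cells.items()
    }

if __name__ == "__main__":
    for cell, stats in sorted(cell_statistics(establishments, "trained").items()):
        print(cell, stats)
```

The same routine applied to a matched panel of two survey years would also deliver the shares of establishments increasing, keeping, or decreasing a practice, as reported in table 6.14.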
References

Acemoglu, Daron. 2000. Technological change, inequality, and the labor market. NBER Working Paper no. 7800. Cambridge, MA: National Bureau of Economic Research, July.
Aghion, Philippe, Eve Caroli, and Carlos Garcia-Peñalosa. 1999. Inequality and economic growth: The perspective of the new growth theories. Journal of Economic Literature 37 (3): 1615–60.
Atkeson, Andrew, and Patrick J. Kehoe. 2002. Measuring organization capital. NBER Working Paper no. 8722. Cambridge, MA: National Bureau of Economic Research, January.
Arthur, Jeffrey. 1994. Effects of human resource systems on manufacturing performance and turnover. Academy of Management Journal 37:670–87.
Athey, Susan, and Scott Stern. 1998. An empirical framework for testing theories about complementarity in organizational design. NBER Working Paper no. 6600. Cambridge, MA: National Bureau of Economic Research.
Bailey, Thomas. 1993. Organizational innovation in the apparel industry. Industrial Relations 32:30–48.
Bailey, Thomas, Peter Berg, and Carola Sandy. 2001. The effect of high performance work practices on employee earnings in the steel, apparel, and medical electronics and imaging industries. Industrial and Labor Relations Review 54 (2): 525–45.
Barron, J., M. Berger, and D. Black. 1997. How well do we measure training? Journal of Labor Economics 13 (2): 308–33.
———. 1999. The use of matched employee-employer data as a means of assessing data reliability with non-classical measurement error. In The creation and analysis of linked employer-employee data, ed. J. Haltiwanger, J. Lane, J. Spletzer, J. Theeuwes, and K. Troske, 419–38. New York: North Holland.
Bartel, Ann. 1989. Formal employee training programs and their impact on labor productivity: Evidence from a human resource survey. NBER Working Paper no. 3026. Cambridge, MA: National Bureau of Economic Research.
———. 2004. Human resource management and organizational performance: Evidence from retail banking. Industrial and Labor Relations Review.
Batt, Rose. 2001. Explaining wage inequality in telecommunications services: Customer segmentation, human resource practices, and union decline. Industrial and Labor Relations Review 54 (2): 425–49.
Blair, Margaret M., and Steven M. H. Wallman. 2001. Unseen wealth: Report of the Brookings Task Force on Intangibles. Washington, DC: Brookings Institution Press.
Black, Sandra, and Lisa M. Lynch. 1996. Human capital investments and productivity. American Economic Review 86 (2): 263–67.
———. 2001. How to compete: The impact of workplace practices and information technology on productivity. Review of Economics and Statistics 83 (3): 434–45.
———. 2004. What's driving the new economy: The benefits of workplace innovation. The Economic Journal 114 (February): F97–F116.
Black, Sandra E., Lisa M. Lynch, and Anya Krivelyova. 2004. How workers fare when employers innovate. Industrial Relations 43 (1): 44–66.
Boning, Brent, Casey Ichniowski, and Kathryn Shaw. 2001. Opportunity counts: Teams and the effectiveness of production incentives. NBER Working Paper no. 8306. Cambridge, MA: National Bureau of Economic Research, May.
Bresnahan, Timothy F., Erik Brynjolfsson, and Lorin M. Hitt. 2002. Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. The Quarterly Journal of Economics 117 (February): 339–76.
Cappelli, Peter, and William Carter. 2000. Computer, work organization, and wage outcomes. NBER Working Paper no. 7987. Cambridge, MA: National Bureau of Economic Research, October.
Cappelli, Peter, and David Neumark. 2001. Do "high-performance" work practices improve establishment-level outcomes? Industrial and Labor Relations Review 54 (4): 737–75.
Caroli, Eve, and John Van Reenen. 2001. Skill-biased organizational change? Evidence from a panel of British and French establishments. The Quarterly Journal of Economics 116:1449–92.
Delany, John, and Mark Huselid. 1996. The impact of human resource management practices on perceptions of performance in for-profit and nonprofit organizations. Academy of Management Journal 39:949–69.
Dunlop, John, and David Weil. 1996. Diffusion and performance of modular production in the U.S. apparel industry. Industrial Relations 35 (July): 334–54.
Gittleman, Maury, Michael Horrigan, and Mary Joyce. 1998. Flexible workplace practices: Evidence from a nationally representative survey. Industrial and Labor Relations Review 52 (1): 99–115.
Haltiwanger, John, and Steven Davis. 1991. Wage dispersion between and within U.S. manufacturing plants, 1963–1986. Brookings Papers on Economic Activity, Microeconomics: 115–200.
Hunter, Larry W., Annette Bernhardt, Katherine L. Hughes, and Eva Skuratowicz. 2001. It's not just the ATMs: Technology, firm strategies, jobs, and earnings in retail banking. Industrial and Labor Relations Review 54 (2): 402–24.
Huselid, Mark A. 1995. The impact of human resource management practices on turnover, productivity, and corporate financial performance. Academy of Management Journal 38 (3): 635–72.
Huselid, Mark A., and Brian E. Becker. 1996. High performance work systems and firm performance: Cross-sectional versus panel results. Industrial Relations 35:400–22.
Ichniowski, Casey. 1990. Human resource management systems and the performance of U.S. manufacturing businesses. NBER Working Paper no. 3449. Cambridge, MA: National Bureau of Economic Research, September.
Ichniowski, Casey, and Kathryn Shaw. 2003. Beyond incentive pay: Insiders' estimates of the value of complementary human resource management practices. The Journal of Economic Perspectives 17 (1): 155–78.
Ichniowski, Casey, Kathryn Shaw, and Gabrielle Prennushi. 1997. The effects of human resource management practices on productivity. American Economic Review 87 (3): 291–313.
Kandel, E., and Edward Lazear. 1992. Peer pressure and partnerships. Journal of Political Economy 100:801–17.
Kelley, Maryellen. 1994. Information technology and productivity: The elusive connection. Management Science 40:1406–25.
———. 1996. Participative bureaucracy and productivity in the machined products sector. Industrial Relations 35:374–99.
Kremer, Michael, and Eric Maskin. 1996. Wage inequality and segregation by skill. NBER Working Paper no. 5718. Cambridge, MA: National Bureau of Economic Research, August.
Kruse, Douglas, and Joseph Blasi. 1998. The new employee/employer relationship. Rutgers University, Aspen Institute Domestic Strategy Group. Mimeograph, August.
Lynch, Lisa M., and Sandra E. Black. 1998. Beyond the incidence of training: Evidence from a national employers' survey. Industrial and Labor Relations Review 52 (1): 64–81.
Malcomson, James. 1983. Trade unions and economic efficiency. The Economic Journal 93:50–65.
Milgrom, P., and J. Roberts. 1995. Complementarities and fit: Strategy, structure, and organizational change in manufacturing. Journal of Accounting and Economics 19:179–208.
Osterman, Paul. 1994. How common is workplace transformation and who adopts it? Industrial and Labor Relations Review 47 (2): 173–87.
———. 2000. Work reorganization in an era of restructuring: Trends in diffusion and effects on employee welfare. Industrial and Labor Relations Review 53 (2): 179–96.
Comment
Kathryn Shaw
There is no doubt that organizational capital contributes to the value of some firms. The human resource practices—such as teams, training, and work design—that form organizational capital have been shown to raise productivity and to raise stock market value. Moreover, the case study evidence illuminating the value of human resource practices is persuasive. Therefore, it is time that we seek to quantify and understand how organizational capital raises productivity or market value for firms in the future. In all likelihood, firms’ investments in physical capital are declining and their investments in organizational capital are rising. If we don’t measure these intangible investments, we will not be able to explain or predict economic growth. Therefore, Black and Lynch examine the potential ways in which we might measure organizational capital. They emphasize several criteria to be used in developing the measures of organizational practices: first, we want to develop survey questions that have a reasonable response rate, so the data are fairly complete; second, we want variables that the survey respondents understand and can readily answer; and third, we want variables that have some variance across firms so there is real information in the data. The authors focus on the use of the EQW data to assess each of these requirements for the extensive variables that were developed for that survey. In addition, we also want to measure variables, such as teamwork, that have been shown to have an impact on performance, so the authors review the human resource (HR) productivity or HR performance literature. These are all excellent points. As for extending their analysis in the future, I have several suggestions. 1. One goal in measuring organizational capital is to add organizational variables to standard surveys, such as census surveys, so that we can really develop a time series on these investments. In this case, we must isolate a few key variables that might serve as proxies for a longer list of variables. Using the Black and Lynch definition of organizational capital, we would wish to find key measures for the three categories of teamwork, training, and job design (and I would add forms of incentive pay). Black and Lynch give us some guidelines for uncovering such variables. In addition to their suggestions, we should do two further things. First, we would want to ask if there are HR practices that serve as the key variables in systems of complementary HR practices. For example, some firms may have “teambased” systems, with all other HR practices aimed at supporting the teams. Kathryn Shaw is the Ernest C. Arbuckle Professor of Economics in the Graduate School of Business, Stanford University, and a research associate of the National Bureau of Economic Research.
It is likely that systems of HR practices vary across industries, technologies, and firms, but we should examine the data and literature with this question of identifying HR systems in mind. Second, we want to look at a correlation matrix of the HR practices, or undertake factor analysis, to isolate those key HR measures. We could do this by industry or by firm size (a brief illustrative sketch of this step appears after the reference below). 2. What are firms doing differently when they invest in new organizational practices—how does their production process change? In addition to measuring the practices, we might measure the changes that they produce. For example, in work on the impact of HR practices on performance, we create a variable called "connective capital" (Ichniowski and Shaw 2004). Connective capital is the amount of communication links within the work area, where high communications tend to represent greater amounts of local problem solving by workers (rather than by engineers or managers). Connective capital is correlated with innovative HR practices, such as teamwork, incentive pay, information sharing, extensive training, and careful hiring. 3. We should look to the literature to see if some HR practices seem to raise the level of output when they are adopted, and if other practices continually raise the growth rate of output. Both matter to our models of economic growth, but in different ways. 4. Finally, although good measures of organizational practices may predict economic growth, we also want to know which types of firms will gain the most from these practices. Knowledge regarding which firms invest will help us make policy statements, both on the likely growth of potential output and also on possible ways of subsidizing these investments (such as training) if such subsidies are warranted because there are externalities with organizational investments that firms can't internalize. For example, if organizational investments require a workforce with high levels of general skills, then we should examine options for government subsidies for these skills, like forms of education. We have very little empirical knowledge of the adoption of practices, and yet clearly this knowledge of adoption practices is essential and may be facilitated by these more extensive data on organizational practices. Overall, Black and Lynch suggest important ways of measuring organizational capital, and in the long run, measures of organizational capital could be very important in both predicting and creating economic growth.

References

Ichniowski, Casey, and Kathryn Shaw. 2004. Connective capital: Building problem-solving networks within firms. Stanford University, Graduate School of Business. Working Paper, January.
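The correlation-matrix step suggested in point 1 above can be illustrated with a minimal sketch. The establishment-by-practice matrix, the practice names, and the values below are invented for illustration; the point is only that pairwise correlations (and, as a next step, a factor analysis of the same matrix) give a quick way to see which practices cluster together into candidate HR systems.

```python
# Minimal sketch of the correlation-matrix step suggested in the comment.
# The practice indicators and the establishment-by-practice matrix are invented.
# A factor analysis of the same matrix would be the natural next step.
import numpy as np

practices = ["self_managed_teams", "incentive_pay", "job_rotation", "tqm"]

# Rows: establishments; columns: 0/1 indicators for each practice.
X = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
])

corr = np.corrcoef(X, rowvar=False)  # practice-by-practice correlation matrix

for name, row in zip(practices, corr):
    print(f"{name:20s}", " ".join(f"{c:5.2f}" for c in row))
```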
7 Pharmaceutical Knowledge-Capital Accumulation and Longevity
Frank R. Lichtenberg
People value leisure time as well as goods, so longevity increase is an important part of economic growth, broadly defined. Research and development (R&D) is the principal source of economic growth, and the pharmaceutical industry is the most R&D-intensive sector of the economy. In this paper we assess the contribution of pharmaceutical R&D to longevity increase (hence to economic growth) by analyzing the relationship between Food and Drug Administration (FDA) approvals of new molecular entities and changes in the age distribution of deaths from all diseases, using longitudinal disease-level data. Until the middle of the twentieth century, analyses of long-run macroeconomic performance were based on an aggregate production function of the form (1)
Y = F(K, N),
where Y is real GDP (the market value of goods and services produced), K is capital input, and N is labor input. Capital is, of course, a produced means of production, which accumulates according to the perpetual inventory equation K_{t+1} = (1 − δ)K_t + I_t, where δ is the depreciation rate and I denotes investment. In the last fifty or sixty years, economists have recognized the inadequacies of this production function—its failure to account for important aspects of observed macroeconomic behavior—and have modified and
Frank R. Lichtenberg is the Courtney C. Brown Professor of Business at Columbia University Graduate School of Business and a research associate of the National Bureau of Economic Research.
extended it in several ways. The most important modifications have been expansions of the sets of both inputs and outputs accounted for. Beginning in the 1950s, economists realized that some, if not most, of the growth in output could not be accounted for by growth in capital and labor. Some output growth was due to total factor productivity (TFP) growth—growth in output per unit of total input—which is not accounted for in equation (1). Growth in TFP was hypothesized to be due to technological progress. The production function could easily be modified to allow for the existence of technological progress: (2)
Y = AF(K, N),
where A is an index of the level of technology. Solow (1956) demonstrated that, in the long run, the growth rate of per capita output is equal to the rate of technological progress (the growth rate of A).1 In that paper, Solow assumed that technological progress was exogenous: it descends upon the economy like “manna from heaven,” automatically and regardless of whatever else is going on in the economy (Jones 1998, pp. 32–33). But subsequent investigators have hypothesized and provided evidence that productivity growth and technological progress are endogenous—determined by investment in R&D. The dependence of technical progress on R&D is a key feature of recent theoretical (“endogenous growth”) models (Romer 1990). Griliches (1979) proposed the following model to incorporate endogenous (R&D-generated) technical change into the production function: (3)
Y = F(K, N, Z),
where Z is the stock of "knowledge capital." Like physical capital, knowledge capital is a produced means of production, which accumulates according to the perpetual inventory equation Z_{t+1} = (1 − δ_Z)Z_t + RD_t, where δ_Z is the knowledge-capital depreciation rate and RD denotes R&D investment. There are two ways in which one can use equation (3) to assess the contribution of knowledge capital to productivity growth. One is to examine the relationship (e.g., across industries) between TFP growth and the growth of Z. The other is to examine the relationship between TFP growth
1. As noted by Jones (1998, pp. 31 and 34), “there is no per capita income growth in the [basic Solow] model” (i.e., the model with no technological progress); “technological progress is the source of sustained per capita growth” (emphasis added). Empirical evidence about the relative contribution of TFP growth to output growth has been mixed. Solow estimated that about 90 percent of actual output growth was due to TFP growth; in 1967, Jorgenson and Griliches (1967) estimated that only about 10 percent was due to TFP growth; in 1973, Christensen and Jorgenson (1973) estimated that about half of actual output growth was due to TFP growth.
and “R&D intensity” (the ratio of R&D investment to output).2 Under certain reasonable assumptions, the R&D intensity coefficient in the TFP regression is an estimate of the marginal product of knowledge capital and of the rate of return to investment in R&D.3 Numerous empirical studies (e.g., Griliches and Lichtenberg 1984; Lichtenberg and Siegel 1991) have provided strong support for the hypothesis that R&D has contributed significantly to growth in the market value of goods and services produced. But economists believe that the utility, or welfare, of individuals and nations depends not only on the goods and services they consume but also on the amount of (leisure) time they have. Leisure time as well as goods is an argument of the utility function. Becker defined an individual’s “full income” as the value of goods consumed plus the value of leisure time “consumed.” Let us define (4)
Y* = G(Y, L),
where Y* is "full income" (or utility) and L is leisure time. A simple linear approximation of this function is Y* = Y + p_L·L, where p_L is the shadow price of leisure time (relative to the price of goods). Suppose, for simplicity, that p_L remains constant over time. Then ΔY* = ΔY + p_L·ΔL. The change in full income is the change in gross domestic product (GDP) plus the change in the value of leisure time consumed. During the last century, longevity increase has been an important source of increase in the average person's leisure time over the course of the life cycle. Nordhaus (2003) estimated that "to a first approximation, the economic value of increases in longevity over the twentieth century is about as large as the value of measured growth in nonhealth goods and services." In other words, his estimates imply that ΔY ≈ p_L·ΔL. Due to the importance of leisure time in general, and longevity in particular, to economic well-being, we propose replacing GDP in the production function by "full income": (5)
Y* = G(Y, L) = F(K, N, Z).
We hypothesize that R&D-generated increases in the stock of knowledge capital (Z ) may have a positive impact on both components of full income: leisure time (via longevity) and consumption of goods and services. 2. The second approach does not require a long history of R&D investment or an estimate of the initial knowledge-capital stock. 3. Since capital and labor engaged in R&D are already included in K and N—they are “double counted”—the R&D intensity coefficient is an estimate of the excess return to R&D—the difference between the return to R&D and the return on ordinary investment.
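As a concrete illustration of the accumulation identity used above, the following is a minimal sketch of building a knowledge-capital stock from an R&D investment series with the perpetual inventory equation Z_{t+1} = (1 − δ_Z)Z_t + RD_t. The investment figures, the 15 percent depreciation rate, and the steady-state initial-stock rule are illustrative assumptions, not estimates from this chapter.

```python
# Minimal sketch: perpetual-inventory accumulation of a knowledge-capital stock.
# The R&D series, depreciation rate, and initial-stock rule are hypothetical.

def knowledge_capital_stock(rd, delta=0.15, z0=None):
    """Return the beginning-of-year stock Z_t implied by Z_{t+1} = (1 - delta) * Z_t + RD_t.

    rd    : list of R&D investment by year (constant prices)
    delta : assumed knowledge-capital depreciation rate
    z0    : initial stock; if None, use the steady-state value RD_0 / delta
    """
    z = rd[0] / delta if z0 is None else z0
    stocks = []
    for investment in rd:
        stocks.append(z)
        z = (1.0 - delta) * z + investment   # perpetual inventory update
    return stocks

if __name__ == "__main__":
    # Hypothetical pharmaceutical R&D spending, billions of dollars per year.
    rd_series = [10.0, 11.0, 12.5, 14.0, 15.5, 17.0]
    for year, stock in zip(range(1994, 2000), knowledge_capital_stock(rd_series)):
        print(year, round(stock, 1))
```

The same series divided by output gives the R&D intensity that enters the TFP regression described in the text.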
According to the National Science Foundation (NSF), in 1996, 16 percent of U.S. R&D was associated primarily with the life sciences; this share increased from 12 percent in 1985. In the next section we discuss the measurement of pharmaceutical knowledge-capital accumulation. In section 7.2 we postulate an econometric model of the effect of pharmaceutical knowledge-capital accumulation on the age distribution of deaths. Measurement of changes in the age distribution of deaths, by cause of death, is discussed in section 7.3. Empirical results are reported in section 7.4, and section 7.5 presents a summary and conclusions.

7.1 Measurement of Pharmaceutical Knowledge-Capital Accumulation

The basic hypothesis we wish to investigate is that pharmaceutical R&D investment has increased the longevity of Americans, as shown in panel A of figure 7.1. For a variety of reasons, however, we did not think that the best way to test this hypothesis was to perform an econometric analysis of the relationship between pharmaceutical R&D investment and longevity. There are two other indicators of pharmaceutical R&D investment that are potentially more fruitful to analyze than pharmaceutical R&D investment itself: pharmaceutical patents and FDA new drug approvals. We will argue that pharmaceutical patents are subject to most of the same econometric limitations as pharmaceutical R&D investment but that FDA new drug approval data provide an excellent opportunity to (indirectly) examine the R&D-longevity relationship. New drug approvals by the FDA may be interpreted as an "intermediate good" in the R&D-longevity relationship, as shown in panel B of figure 7.1.4

To explain the relationship between R&D investment and new drug approvals, and why the latter is a superior indicator for explaining changes in longevity, it is useful to briefly describe the process of drug development. The FDA's depiction of the new drug development timeline is shown in figure 7.2. There are three main phases of drug development up until the time of new drug approval. The first phase is preclinical testing and R&D, including testing in animals. According to the FDA, the average duration of this phase is eighteen months. In order to proceed to the second stage, the drug sponsor must submit, and receive FDA approval of, an investigational new drug (IND) application. Upon approval of the IND, the sponsor may begin clinical R&D (human trials). As table 7.1 indicates, there are three phases of clinical R&D. According to the FDA, the average duration of the three phases combined is five years.
4. "FDA estimates that, on average, it takes eight-and-a-half years to study and test a new drug before the agency can approve it for the general public. That includes early laboratory and animal testing, as well as later clinical trials using human subjects. Drug companies spend $359 million, on average, to develop a new drug, according to a 1993 report by the Congressional Office of Technology Assessment" (FDA 1999, p. 15).
Fig. 7.1 A, Pharmaceutical R&D investment and longevity; B, Pharmaceutical R&D investment, FDA new drug approvals, and longevity
After completing clinical R&D, the drug sponsor can submit a new drug application (NDA) to the FDA. For decades, the regulation and control of new drugs in the United States has been based on the NDA. Since 1938, every new drug has been the subject of an approved NDA before U.S. commercialization. The data gathered during the animal studies and human clinical trials of an IND become part of the NDA. According to the FDA, the average duration of the NDA review process is two years. The FDA says that of 100 drugs for which IND applications are submitted, about 70 percent will successfully complete phase 1 and go on to phase 2; about 33 percent of the original 100 will complete phase 2 and go to phase 3; and 25 to 30 of the original 100 will clear phase 3 (and, on average, about 20 of the original 100 will ultimately be approved for marketing). This is consistent with 1990–2001 data on the number of commercial5 INDs received and NDAs received and approved, shown in figure 7.3. The average annual number of NDAs approved (eighty-five) was 21 percent of the average annual number of INDs received (403). As table 7.2 shows, there are seven different kinds of new drug applications. New molecular entities (NMEs) account for only about a third of all new drug approvals, but they probably account for the vast majority of pharmaceutical R&D expenditure,6 and they are the NDAs that are most likely to increase longevity.7 DiMasi (2001) argues that mean drug development time has increased 5. “Commercial INDs” are applications that are submitted primarily by companies whose ultimate goal is to obtain marketing approval for a new product. There is another class of filings broadly known as “noncommercial” INDs. The vast majority of INDs are, in fact, filed for noncommercial research. These types of INDs include “Investigator INDs,” “Emergency Use INDs,” and “Treatment INDs.” 6. Cross-sectional firm-level estimates support this hypothesis. When we compute a (“reverse”) regression of a firm’s average annual R&D expenditure on its average annual number of NDA approvals, by type, the number of NMEs is positive and highly significant, and the number of other NDAs is not significantly different from zero. 7. Forty-two percent of the NMEs approved during 1990–2001 were “priority-review approvals”—that is, considered by the FDA to represent “significant improvement compared to marketed products, in the treatment, diagnosis, or prevention of a disease.” Only 14 percent of non-NME NDAs approved were priority-review approvals.
Fig. 7.2
New drug development timeline
Table 7.1 Phases of clinical research

Phase   No. of patients                       Length                      Purpose                                            Percent of drugs successfully tested
1       20–100                                Several months              Mainly safety                                      70
2       Up to several hundred                 Several months to 2 years   Some short-term safety, but mainly effectiveness   33
3       Several hundred to several thousand   1–4 years                   Safety, effectiveness, dosage                      25–30
Fig. 7.3 Commercial INDs received and NDAs received and approved, 1990–2001

Table 7.2 Types of new drug applications (NDAs), 1990–2001

NDA type                                                  Percent of NDAs approved
New formulation                                                     46
New molecular entity                                                35
New manufacturer                                                    10
New combination                                                      6
New ester, new salt, or other noncovalent derivative                 2
New indication (a)                                                   1
Drug already marketed but without an approved NDA                    1

a. Beginning in 1994, new indications were tracked as efficacy supplements, not as NDAs.
sharply since the 1960s (see figure 7.4). His figures indicate that, for the last twenty years, mean drug development time has been 14.2 years, substantially longer than the FDA’s estimate of 8.5 years. Pharmaceutical Research and Manufacturers of America (PhRMA) provides statistics, based on its annual survey of pharmaceutical firms, on the distribution of 1999 pharmaceutical R&D expenditure by function. These are shown in table 7.3. These figures suggest that as much as 36 percent of R&D expenditure occurs during the preclinical phase of drug development, which is, on average (according to DiMasi’s estimates), about eight years before NDA approval. Another 29 percent of R&D expendi-
Fig. 7.4
Average number of years for drug development, 1960s to 1990s
Source: DiMasi (2001).
Table 7.3 Distribution of 1999 pharmaceutical R&D expenditure, by function

Function                                                     Percent of R&D expenditure
Synthesis and extraction                                               10.0
Biological screening and pharmacological testing                       14.2
Toxicology and safety testing                                           4.5
Pharmaceutical dosage formulation and stability testing                 7.3
Clinical evaluation: Phases I, II, and III                             29.1
Clinical evaluation: Phase IV                                          11.7
Process development for manufacturing and quality control               8.3
Regulatory: IND and NDA                                                 4.1
Bioavailability                                                         1.8
Other                                                                   9.0
Fig. 7.5 Estimates of aggregate pharmaceutical industry R&D from NSF and PhRMA surveys
ture occurs during the clinical phase—which is, on average, about five years—before NDA approval. On average, then, the lag from R&D expenditure to new drug approval appears to be quite long and quite variable. Long and variable lags are but one of the obstacles to a direct examination of the R&D-longevity relationship. There are several others: apparently inconsistent estimates of pharmaceutical R&D investment, smoothness of the aggregate time series R&D data, and lack of disaggregated data. We discuss these in turn.

7.1.1 Divergent Estimates

There are two distinct surveys that provide data on the amount of pharmaceutical industry R&D: the National Science Foundation (NSF) Survey of Industrial Research and Development,8 and the PhRMA annual survey of research-based pharmaceutical companies. Figure 7.5 shows estimates of aggregate pharmaceutical industry R&D from the two surveys. Before 1990, the estimates differed by less than 10 percent, but in 1996 and 1997 the estimates differed by about 30 percent.

7.1.2 Smoothness

To identify the effect of pharmaceutical R&D investment on longevity, significant variability in R&D investment is required. As figure 7.5 suggests,
8. This survey is administered by the Census Bureau.
Table 7.4 Distribution of pharmaceutical R&D expenditure, by drug class, 1997–99 (%)

Drug class                                                      1997   1998   1999   Average
Acting on the central nervous system and sense organs            29     30     23      27
Affecting neoplasms, endocrine system, and metabolic diseases    24     23     24      23
Acting on infective and parasitic diseases                       22     22     14      19
Acting on the cardiovascular system                              17     17     15      16
Acting on the respiratory system                                   6      4      4       5
Other human use                                                    0      0     10       3
Biologicals                                                        0      3      5       3
Acting on the digestive or genitourinary system                    0      0      4       1
Diagnostic agents                                                  2      1      0       1
Acting on the skin                                                 0      0      1       0
Vitamins and nutrients                                             0      0      0       0
aggregate pharmaceutical R&D investment is very closely approximated by an exponential trend; that is, it exhibits very little variability.9 This is not surprising: R&D in general is known to be very persistent, especially in comparison with ordinary investment.

7.1.3 Lack of Disaggregated Data

In principle, variability of R&D investment could be increased via disaggregation—for example, by class of drugs. Unfortunately, the NSF survey does not provide any disaggregated pharmaceutical R&D investment data. The PhRMA survey does, but as table 7.4 indicates, the drug classes are quite broad; 85 percent of investment during 1997–99 was in the largest four classes.

7.1.4 Are patent data likely to be useful?

R&D and patenting are known to be closely related (Griliches 1984). Perhaps patent data could supersede most of the limitations of the R&D data. The U.S. Patent and Trademark Office (USPTO) publishes data on the number of patents granted for "drug, bio-affecting and body treating compositions" (patent class 514). Figure 7.6 presents annual data on the number of "drug patents" (patents in class 514) and total patents granted from 1980 to 2000. Drug patents do exhibit somewhat more variability than R&D expenditure.10 However, drug patents track total patents quite closely. Disaggregation of drug patents by therapeutic action appears to be in-
9. The R² of the regression of the PhRMA R&D series on an exponential trend is .9913.
10. The R² of the regression of the drug patent series on an exponential trend is .8897.
Fig. 7.6
U.S. drug patents granted and total patents granted, 1980 to 2000
feasible. Although certain subclasses of class 514 pertain to specific diseases (e.g., subclass 866 refers to diabetes, and subclass 883 refers to Hodgkin’s disease),11 these subclasses are “cross-reference art collections,” and drug patents are not systematically classified by disease or therapeutic action.12 It appears that drug patents are often granted fairly early in the drug development cycle. According to PhRMA, “the average period of effective patent life (when a drug can be marketed) for new drugs introduced in the early to mid-1990s with patent-term restoration has been only 11–12 years. Innovators in other industries typically receive upwards of 18.5 years of effective patent life.”13 This suggests that, on average, patents are granted at least seven years prior to the market introduction of new drugs. Long and variable lags diminish the likelihood that drug patents can explain fluctuations in longevity. Data on both pharmaceutical R&D expenditure and pharmaceutical patents are too aggregated, exhibit too little variability, and are too subject 11. See http://www.uspto.gov/go/classification/uspc514/sched514.htm. 12. The seventh (1999) edition of the International Patent Classification system appears to provide (in class A61P) a systematic classification of chemical compounds and medicinal preparations by therapeutic activity. For example, subclass 1/00 includes drugs for disorders of the alimentary tract or the digestive system, and subclass 1/18 covers drugs for pancreatic disorders (e.g., pancreative enzymes). (See http://www.wipo.int/classifications/fulltext/new _ipc/index.htm.) 13. See http://www.phrma.org/publications/publications/profile01/chapter8.phtml.
to excessively long lags to serve as a basis for testing our key hypothesis. But these limitations may be overcome by combining data from two different sources: First DataBank’s National Drug Data File (NDDF)14 and FDA data on NDA approvals.15 These data sources enable us to compute the stock of drugs available (i.e., previously approved by the FDA) to treat a given condition in a given year. The NDDF consists of a number of modules. One of these is the indications module, “the goal of [which] is to minimize the risks associated with drug use. The information in this module is intended to be used as a tool for assessing the appropriateness of drug therapy” (First DataBank 1998). We utilize just one part of the clinical module: the Drug Indications Master Table. This table links indications (diseases) to drugs (active ingredients): it lists all of the drugs appropriate for treatment of each indication. Indications are coded using the International Classification of Diseases, ninth revision (ICD9; http://www.cdc.gov/nchs/about/major/dvs/icd9des .htm),16 the same classification system used in the mortality data we will analyze.17 Sample data from the NDDF Drug Indications Master Table, for two indications—tuberculosis and hypercholesterolemia—are shown in table 7.5.18 The table lists eleven drugs appropriate for the treatment of tuberculosis and fourteen drugs appropriate for the treatment of hypercholesterolemia. All of the tuberculosis drugs are designated as “labeled,” but 14. First DataBank, a wholly owned subsidiary of the Hearst Corporation, is a leading provider of electronic drug information. For more than two decades, it has delivered knowledge bases for various health care applications, including clinical decision support within the work flow. Its portfolio also includes comprehensive reference products, integrated content software, and specialty software for physicians and nutritionists. Many of these products help reduce the incidence of medication errors and adverse drug events, which can result in shorter hospital stays, lower medical costs, and improved patient care. The NDDF Plus knowledge base combines the drug information of the National Drug Data File with advanced clinical decision-support modules, to deliver complete descriptive, pricing, and clinical information for every drug approved by the FDA. Their staff includes clinicians, software engineers, and knowledge-base experts. It is found in installations ranging from retail pharmacies to hospital pharmacies and laboratories; physician and other health care professional practices; and e-health care companies, managed care organizations, and insurers. 15. Section 505 of the Federal Food, Drug, and Cosmetic Act states that “no person shall introduce or deliver for introduction into interstate commerce any new drug, unless an approval of an application . . . is effective with respect to such drug.” 16. The International Classification of Diseases (ICD) is designed for the classification of Morbidity and Mortality information for statistical purposes, and for the indexing of hospital records by disease and operations, for data storage and retrieval. The ICD is developed collaboratively between the World Health Organization (WHO) and ten international centers, for purposes of ensuring that medical terms reported on death certificates are internationally comparable and lend themselves to statistical analysis. 
The ICD has been revised approximately every ten years since 1900 in order to reflect changes in understanding of disease mechanisms and in disease terminology. 17. Information about drugs appropriate for treatment of specific indications can be obtained on a piecemeal basis from http://www.medscape.com. 18. The complete Drug Indications Master Table contains almost 7,000 links between indications and drugs.
Table 7.5 Sample data for two indications from National Drug Data File (NDDF) Drug Indications Master Table, by International Classification of Diseases, Ninth Revision (ICD9) code

ICD9 code   Indication                Drug                             Labeled or unlabeled
0119        Tuberculosis              Capreomycin                      L
0119        Tuberculosis              Isoniazid                        L
0119        Tuberculosis              Cycloserine                      L
0119        Tuberculosis              Ethambutal                       L
0119        Tuberculosis              Ethionamide                      L
0119        Tuberculosis              Aminosalicyate Sodium            L
0119        Tuberculosis              Acetylcysteine (INH)             L
0119        Tuberculosis              Pyrazinamide                     L
0119        Tuberculosis              Rifampin                         L
0119        Tuberculosis              Rifampin and Isoniazid           L
0119        Tuberculosis, Pulmonary   Rifapentine                      L
272         Hypercholesterolemia      Lovastatin                       L
272         Hypercholesterolemia      Pravastatin                      L
272         Hypercholesterolemia      Simvastatin                      L
272         Hypercholesterolemia      Cholestyramine                   L
272         Hypercholesterolemia      Colestipol                       L
272         Hypercholesterolemia      Probucol                         L
272         Hypercholesterolemia      Fluvastatin                      L
272         Hypercholesterolemia      Atorvastatin                     L
272         Hypercholesterolemia      Niacin (SA-Lipotropic)           L
272         Hypercholesterolemia      Cerivastatin                     L
272         Hypercholesterolemia      Garlic                           L
272         Hypercholesterolemia      Psyllium, Bran                   U
272         Hypercholesterolemia      Neomycin                         U
272         Hypercholesterolemia      Conj. estrogen, M-Progesterone   U
three of the hypercholesterolemia drugs are designated as “unlabeled.”19 According to the American Medical Association’s Council on Scientific Affairs, Unlabeled uses are defined as the use of a drug product for indications or in patient populations, doses, or routes of administration that are not included in FDA-approved labeling. The prevalence and clinical importance of prescribing drugs for unlabeled uses are substantial. Unlabeled indications are especially common in oncology, rare diseases, and pediatrics. Thus, the prescribing of drugs for unlabeled uses is often necessary for optimal patient care.20
19. About 25 percent of the almost 7,000 entries in the Drug Indications Master Table are designated “unlabeled.” 20. See http://www.ama-assn.org/ama/pub/article/2036-2420.html. See also Cranston et al. (1998).
We will construct estimates of the stock of drugs available to treat specific conditions, both excluding and including unlabeled indications. The NDDF Drug Indications Master Table lists all of the drugs appropriate for treating given conditions that were available in the year 1999.21 We want to determine the number of drugs appropriate for treating given conditions that were available in each of the years 1979–98 (the years for which we have mortality data). To determine this, we identified, from published and unpublished FDA data, the year in which each of the drugs listed in the NDDF Drug Indications Master Table was first approved as an NME by the FDA. The FDA provided us with a list of all 821 NMEs approved by the FDA during the period 1950–93. We extended this list through 1998 using another unpublished FDA data file and data posted on the FDA website. The FDA data on NME approvals are illustrated in table 7.6, which shows NMEs approved in calendar year 2000.
We aggregated the data in the NDDF Drug Indications Master Table up to the (approximately) two-digit ICD9 level, to be consistent with the Centers for Disease Control (CDC) Mortality Data (described below). There is considerable variation across diseases—even diseases in the same broad disease groups—in the extent and timing of increases in the stock of available drugs. This is illustrated by figure 7.7, which shows, for two conditions—diseases of the thyroid gland and diseases of other endocrine glands—the number of drugs available to treat the condition in year t, as a percent of the number of drugs available to treat the condition in 1979.22 Between 1979 and 1984, the number of drugs available to treat diseases of the thyroid gland increased 29 percent, while the number of drugs available to treat diseases of other endocrine glands increased only 13 percent. However, between 1984 and 1998, the number of drugs available to treat diseases of the thyroid gland did not increase at all, while the number of drugs available to treat diseases of other endocrine glands increased 33 percent.
The algorithm we adopted is based on the assumption that a drug linked to a condition in the NDDF Drug Indications Master Table became available to treat the condition in the year that the drug was first approved as an NME by the FDA. We know that this assumption is incorrect in at least some cases, because some of a drug's indications may be added years after the drug was first approved as an NME. Table 7.7 provides examples of New Indication approvals, and of the predecessor NME approvals. Amantadine hydrochloride was initially approved as an NME in 1966 and designated as an antiviral/anti-influenza/systemic drug. Seven years later, a new indication of the drug was approved by the FDA, and it was also classified as an anti-Parkinson drug. Unfortunately, although we have complete data on NME approvals, data on New Indication approvals are incomplete.
21. Unfortunately, earlier versions of the NDDF Drug Indications Master Table are not available.
22. In 1979, there were seven drugs for treating diseases of the thyroid gland and thirty-eight drugs for treating diseases of other endocrine glands.
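The stock construction just described (linking the 1999 Drug Indications Master Table to the FDA approval history and cumulating counts of previously approved drugs by two-digit ICD9 code and year) can be sketched in a few lines. The sketch below is only an illustration of the logic, not the author's code: the column names, the handful of sample records, and the approval years and review classifications shown are assumptions made for the example.

```python
import pandas as pd

# Hypothetical extracts of the two source files described in the text;
# the ingredient names, approval years, and review codes are illustrative only.
links = pd.DataFrame({            # NDDF Drug Indications Master Table (1999 edition)
    "icd9":    ["0119", "0119", "272", "272", "272"],
    "drug":    ["Isoniazid", "Rifapentine", "Lovastatin", "Atorvastatin", "Neomycin"],
    "labeled": ["L", "L", "L", "L", "U"],
})
approvals = pd.DataFrame({        # FDA NME approval history
    "drug":        ["Isoniazid", "Rifapentine", "Lovastatin", "Atorvastatin", "Neomycin"],
    "approval_yr": [1952, 1998, 1987, 1996, 1957],
    "review":      ["P", "P", "P", "S", "S"],   # P = priority review, S = standard review
})

def drug_stock(links, approvals, years=range(1979, 1999),
               include_unlabeled=True, review=None):
    """Number of previously approved drugs, by two-digit ICD9 code and year."""
    d = links if include_unlabeled else links[links["labeled"] == "L"]
    d = d.merge(approvals, on="drug")
    if review is not None:                      # restrict to priority- or standard-review NMEs
        d = d[d["review"] == review]
    d = d.assign(icd2=d["icd9"].str[:2])        # aggregate to the ~two-digit ICD9 level
    cols = {yr: d.loc[d["approval_yr"] <= yr].groupby("icd2")["drug"].nunique()
            for yr in years}
    return pd.DataFrame(cols).fillna(0).astype(int)   # diseases (rows) x years (columns)

pri_stock = drug_stock(links, approvals, review="P")   # PRI_STOCK panel
std_stock = drug_stock(links, approvals, review="S")   # STD_STOCK panel
```

The series plotted in figure 7.7 is then just each disease's row of this panel divided by its 1979 value, and the labeled-only versus labeled-plus-unlabeled stock measures correspond to the include_unlabeled switch.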
Table 7.6  New Molecular Entities (NMEs) approved in calendar year 2000

NDA number   Generic name                                                      Trade name                      Dosage form   Applicant                                  Classification   Approval date
20-989       Cevimeline HCl                                                    Evoxac                          Capsule       Snowbrand                                  1S               1/11/2000
21-014       Oxcarbazepine                                                     Trileptal                       Tablet        Novartis Pharms                            1S               1/14/2000
20-987       Pantoprazole Sodium                                               Protonix                        Tablet        Wyeth Ayerst                               1S               2/2/2000
21-107       Alosetron HCl                                                     Lotronex                        Tablet        Glaxo Wellcome                             1P               2/9/2000
21-084       Perfluoroalkylpolyether (PFPE); Polytetrafluoroethylene (PTFE)    Skin Exposure Reduction Paste   Paste         U.S. Army Med Res and Material Command    1,4P             2/17/2000
20-789       Zonisamide                                                        Zonegran                        Capsule       Elan Pharms                                1S               3/27/2000
20-971       Articaine HCl 4%; Epinephrine                                     Septocaine                      Injectable    Deproco                                    1,4S             4/3/2000
21-119       Verteporfin                                                       Visudyne                        Injectable    QLT PhotoTherapeutics                      1P               4/12/2000
20-938       Meloxicam                                                         Mobic                           Tablet        Boehringer Pharms                          1S               4/13/2000
21-130       Linezolid                                                         Zyvox                           Tablet        Pharmacia and Upjohn                       1P               4/18/2000
21-081       Insulin Glargine                                                  Lantus                          Injectable    Aventis Pharms                             1S               4/20/2000
20-823       Rivastigmine Tartrate                                             Exelon                          Capsule       Novartis Pharms                            1S               4/21/2000
21-174       Gemtuzumab Ozogamicin                                             Mylotarg                        Injectable    Wyeth Ayerst                               1PV              5/17/2000
21-176       Colesevelam HCl                                                   Welchol                         Tablet        GelTex                                     1S               5/26/2000
20-986       Insulin Aspart Recombinant                                        NovoLog                         Injectable    Novo Nordisk                               1S               6/7/2000
20-715       Triptorelin Pamoate                                               Trelstar Depot                  Injectable    Debio Recherche                            1S               6/15/2000
20-883       Argatroban                                                        Acova                           Injectable    Texas Biotech                              1S               6/30/2000
20-484       Tinzaparin Sodium                                                 Innohep                         Injectable    Dupont Pharms                              1S               7/14/2000
20-610       Balsalazide Disodium                                              Colazal                         Capsule       Salix Pharms                               1S               7/18/2000
20-941       Docosanol                                                         Abreva                          Cream         Avanir Pharm                               1S               7/25/2000
21-214       Unoprostone Isopropyl                                             Rescula                         Solution      Ciba Vision                                1P               8/3/2000
21-197       Cetrorelix Acetate                                                Cetrotide                       Injectable    Asta Medica                                1S               8/11/2000
21-226       Lopinavir; Ritonavir                                              Kaletra                         Capsule       Abbott Labs                                1,4P             9/15/2000
21-248       Arsenic Trioxide                                                  Trisenox                        Injectable    Cell Therapeutics                          1PV              9/25/2000
20-687       Mifepristone                                                      Mifeprex                        Tablet        Population Council                         1P               9/28/2000
20-873       Bivalirudin                                                       Angiomax                        Injectable    The Medicines Co.                          1S               12/15/2000
21-204       Nateglinide                                                       Starlix                         Tablet        Novartis                                   1S               12/22/2000
Fig. 7.7 Number of drugs available to treat condition in year t, as percentage of number of drugs available to treat condition in 1979
Table 7.7  Examples of NDA approvals of new indications for existing drugs

Ingredient name                 Chemical type           Approval date   Therapeutic class
Amantadine Hydrochloride        New Molecular Entity    18-Oct-66       Antiviral; anti-Influenza; systemic
                                New Indication          18-Apr-73       Anti-Parkinson drugs
Clotrimazole                    New Molecular Entity    3-Feb-75        Fungicides (topical)
                                New Indication          29-Jul-96       Antifungal (Candidiasis)
Cromolyn Sodium                 New Molecular Entity    20-Jun-73       Bronchodilator
                                New Indication          3-Jan-97        Respiratory
Cyclosporine                    New Molecular Entity    14-Nov-83       Immunomodulators
                                New Indication          22-May-97       NSAID
Cyproheptadine Hydrochloride    New Molecular Entity    17-Oct-61       Antihistamine/oral
                                New Indication          18-Sep-69       Appetite stimulation
Fluoxetine Hydrochloride        New Molecular Entity    29-Dec-87       Antidepressants
                                New Indication          28-Feb-94       Obsessive Compulsive Disorder
Fluticasone Propionate          New Molecular Entity    14-Dec-90       Steroids
                                New Indication          7-Nov-97        Respiratory
Glycopyrrolate                  New Molecular Entity    11-Aug-61       Miscellaneous Upper GI drugs
                                New Indication          6-Feb-75        Anticholinergic agent
Goserelin Acetate               New Molecular Entity    29-Dec-89       GNRH agonists
                                New Indication          18-Dec-95       Antineoplastic hormones
Lansoprazole                    New Molecular Entity    10-May-95       Proton pump inhibitors
                                New Indication          17-Jun-97       Systemic antibiotics—H. Pylori indication
Mebutamate                      New Molecular Entity    11-Jul-61       Anti-hypertensive agents
                                New Indication          31-Jan-75       Sedatives and hypnotics
Even if complete data on New Indication approvals by the FDA were available, in light of extensive unlabeled drug use, it is not clear how they should be used. A drug approved as an NME might be frequently prescribed for many years for a condition that is “off-label.” When the FDA receives an NDA, it assesses the drug’s “therapeutic potential” and classifies it as either a “Priority Review” drug—one that represents a “significant improvement compared to marketed products, in the treatment, diagnosis, or prevention of a disease”—or a “Standard Review” drug—one that “appears to have therapeutic qualities similar to those of one or more already marketed drugs.”23 Two diseases that have similar increases in the total number of NMEs approved may have quite different increases in the number of priority review NMEs approved. For example, sixteen drugs for treating syphilis were approved during 1979–98, but only five (31 percent) of these were priority review drugs. The total number of drugs approved for treating lymph cancer was lower—fourteen—but ten (71 percent) of these drugs were priority review drugs. In our empirical analysis, we will distinguish between the stock of priority review drugs available to treat a condition and the stock of standard review drugs available. 7.2 Model The basic model we will estimate is (6)
\mathrm{MORT}_{it} = \beta\,\mathrm{DRUG\_STOCK}_{it} + \alpha_i + \delta_t + \varepsilon_{it},

where MORT_it is an indicator of mortality (e.g., mean age at death) from ICD9 disease i (i = 00, 01, . . . , 99)24 in year t (t = 1979, 1980, . . . , 1998) and DRUG_STOCK_it is the stock of drugs available to treat disease i in year t. The fixed disease effects (the α_i) control for any determinants of mortality that vary across diseases but do not vary over time. The year effects (the δ_t) control for any determinants of mortality that vary over time but do not vary across diseases. If the estimate of β is positive and significant, that indicates that diseases with above-average increases in the stock of drugs had above-average changes in the mortality indicator. We can allow for different effects of priority-review and standard-review drug approvals by estimating the more general model

(7)   \mathrm{MORT}_{it} = \beta_P\,\mathrm{PRI\_STOCK}_{it} + \beta_S\,\mathrm{STD\_STOCK}_{it} + \alpha_i + \delta_t + \varepsilon_{it},

where PRI_STOCK_it is the stock of priority-review drugs available to treat disease i in year t and STD_STOCK_it is the stock of standard-review drugs available to treat disease i in year t.
23. Applications for new indications are also classified by therapeutic potential, and the therapeutic potential of a new indication may differ from the therapeutic potential of the NME. 24. The set of 100 diseases is exhaustive.
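As a sketch of how equations (6) and (7) might be taken to data, the fixed effects can be entered as disease and year dummies and the weighting (by the number of deaths in each disease-year cell, as described in section 7.4 below) done with weighted least squares. Everything in the block below is an assumption made for the illustration: the variable names, the synthetic 100-disease by 20-year panel, and the dummy-variable implementation are not the author's actual procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic disease-by-year panel standing in for the real data
# (100 two-digit ICD9 diseases x 20 years, 1979-98).
diseases = [f"{i:02d}" for i in range(100)]
years = list(range(1979, 1999))
panel = pd.DataFrame([(d, y) for d in diseases for y in years],
                     columns=["disease", "year"])
panel["pri_stock"] = rng.poisson(5, len(panel))
panel["std_stock"] = rng.poisson(8, len(panel))
panel["deaths"] = rng.integers(100, 50_000, len(panel))
panel["mean_age_death"] = 70 + 0.065 * panel["pri_stock"] + rng.normal(0, 1, len(panel))

def estimate(df, depvar, stock_vars):
    """Equation (6)/(7): WLS of the mortality indicator on the drug stock(s),
    with disease and year fixed effects, weighted by the number of deaths."""
    X = pd.concat(
        [df[stock_vars],
         pd.get_dummies(df["disease"], prefix="dis", drop_first=True),
         pd.get_dummies(df["year"], prefix="yr", drop_first=True)],
        axis=1).astype(float)
    X = sm.add_constant(X)
    return sm.WLS(df[depvar], X, weights=df["deaths"]).fit()

res = estimate(panel, "mean_age_death", ["pri_stock", "std_stock"])
print(res.params[["pri_stock", "std_stock"]])
```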
We think it is worthwhile to briefly discuss how this model relates to the literature on endogenous technical change and on embodiment.
7.2.1 Endogenous Technical Change Model
In Romer's (1990) model of endogenous technological change, labor is used to produce either output or ideas: L = L_Y + L_A, where L is total labor, L_Y is labor used to produce output, and L_A is labor used to produce ideas. The production function for output is Y = K^α (A L_Y)^{1−α} (0 < α < 1), where Y is output, K is capital, and A is the stock of ideas. The production function for ideas is Ȧ = A_{t+1} − A_t = δ L_A^λ A^φ, where 0 < λ ≤ 1 and φ may be either positive or negative. The DRUG_STOCK variable corresponds to Romer's "stock of ideas" variable (A). In the empirical analysis, we count only the "ideas" (NMEs) that have been approved by the FDA. While the model we will estimate is consistent with Romer's embodied technical change model, there are other ways in which one might specify the drugs-mortality relationship. New products and ideas do not diffuse instantaneously throughout the economy or health care system. After new drugs are introduced, some people continue to use old drugs. Hence, measures of the vintage distribution of drugs used to treat a given disease in a given year might be preferable to a simple count of the number of drugs available to treat the disease. In practice, however, measurement of the vintage distribution of drugs used by disease and year is far more difficult than measurement of the number of drugs available by disease and year. Vintage data can be constructed from the National Ambulatory Medical Care Survey (NAMCS), a survey of physician office visits that collects data on patient diagnoses and drugs prescribed. Unfortunately, prior to 1989, it was conducted in only three years: 1980, 1981, and 1985.25 The NAMCS is an approximately 1-in-10,000 survey of office visits, so it is subject to considerable sampling error. More than one diagnosis is recorded in a significant number of visits; in these cases it is difficult to allocate or assign drugs to diseases. Drug vintages are subject to left-censoring: the vintages of drugs
25. It has been conducted annually since 1989.
that existed prior to the creation of the FDA in 1939 can’t be determined. Finally, interpretation of the coefficient on the number of drugs available is perhaps more straightforward than interpretation of the coefficient on the mean age of drugs used. 7.2.2 Embodiment Implicit in this specification is the hypothesis that the technical progress generated by pharmaceutical R&D is embodied in new drugs. Solow and other economists have recognized since the late 1950s that there are two kinds of technical progress: disembodied and embodied. Suppose that agent i in the economy (e.g., a firm or government agency) engages in R&D. If technical progress is disembodied, another agent ( j) can benefit from agent i’s R&D whether or not he purchases agent i’s products. But if technical progress is embodied, agent j benefits from agent i’s R&D only if he purchases agent i’s products. Solow conjectured that most technical progress was embodied. In one paper (Solow 1962, p. 76), he assumed that “all technological progress needs to be ‘embodied’ in newly produced capital goods before there can be any effect on output.” A number of econometric studies have investigated the hypothesis that capital equipment employed by U.S. manufacturing firms embodies technological change—that is, that “each successive vintage of investment is more productive than the last” (Bahk and Gort 1993, p. 566). Equipment is expected to embody significant technical progress due to the relatively high R&D intensity of equipment manufacturers. According to the NSF, the R&D intensity of machinery and equipment manufacturing is about 50 percent higher than the R&D intensity of manufacturing in general, and 78 percent higher than the R&D intensity of all industries. One method that has been used to test the equipment-embodied technical change hypothesis is to estimate manufacturing production functions, including (mean) vintage of equipment as well as quantities of capital and labor. Bahk and Gort (1993) argued that “we can take due account of the effect of vintage by measuring the average vintage of the stock” (p. 565). Similarly, Sakellaris and Wilson (2001) stated that “a standard production function estimation (in logs) provides an estimate of embodied technical change by dividing the coefficient on average age [of equipment] by the coefficient on capital stock” (capital’s share in total cost; p. 11). These studies have concluded that technical progress embodied in equipment is a major source of manufacturing productivity growth. Hulten (1992) found that as much as 20 percent (and perhaps more) of the Bureau of Labor Statistics (BLS) total factor productivity change (in manufacturing) can be directly associated with embodiment—the higher productivity of new capital than old capital. For equipment used in U.S. manufacturing, best-practice technology may be as much as 23 percent above the average level of technical efficiency.
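To make the estimating framework in these quotations concrete, one way to write the vintage-augmented production function is sketched below. The notation, including the plant and year fixed effects, is ours, and the cited studies differ in their exact specifications:

\ln Y_{jt} = \alpha \ln K_{jt} + \beta \ln L_{jt} + \gamma\,\bar{a}_{jt} + \mu_j + \tau_t + \varepsilon_{jt},

where \bar{a}_{jt} is the average age (vintage) of plant j's equipment. Older equipment embodies older technology, so γ is expected to be negative, and the Sakellaris–Wilson rule quoted above recovers the annual rate of equipment-embodied technical change as |γ|/α, that is, the age coefficient scaled by the coefficient on the capital stock (capital's share in total cost).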
Bahk and Gort (1993) concluded that "Industrywide learning appears to be uniquely related to embodied technical change of physical capital. Once due account is taken of the latter variable, residual industrywide learning [disembodied technical change] disappears as a significant explanatory variable" (p. 579). And Sakellaris and Wilson (2001) estimate that "each vintage is about 12 percent more productive than the previous year's vintage (in the preferred specification)" (p. 2) and that equipment-embodied technical change accounted for about two-thirds of U.S. manufacturing productivity growth between 1972 and 1996. Estimation of equations (6) and (7) enables us to test the pharmaceutical-embodied technical progress hypothesis—the hypothesis that newer drugs increase longevity—and to estimate the contribution of new drugs to longevity increase. One might be concerned that estimation of these equations could result in overestimation of the average longevity impact of pharmaceutical innovation. Suppose that the expected effect of a new drug on mean age at death is higher for some diseases than for others: instead of a single β in equation (6), there is a distribution of β_i's. One might hypothesize that pharmaceutical companies would devote most of their research budgets to diseases where the expected effect of a new drug on mean age at death is highest, and therefore that most new drugs would be developed for such diseases. However, a more rational investment strategy would be to invest heavily in diseases where the total (as opposed to average) expected increase in life-years is greatest. Suppose that the expected effect of a new drug on mean age at death from disease A is six months and that the expected effect of a new drug on mean age at death from disease B is one month. If ten times as many people suffer from disease B as from disease A, then the social (and presumably private) benefits to investment in disease B are higher, even though the benefit per patient is lower. Since firms will not necessarily invest more in diseases with high benefits per patient,26 it is not obvious that the estimate of β will be an overestimate of the (weighted) mean of the β_i's (weighted by number of new drugs).
7.2.3 Other Medical Innovations
New drugs are not the only type of medical innovation that might be hypothesized to contribute to longevity increase. Another important kind of innovation, and one that is also regulated by the FDA, is medical devices. If a company seeks to market a medical device, FDA approval of a premarket approval application (PMA) is required. Premarket approval by the FDA is the required process of scientific review to ensure the safety and
26. In 1983, Congress passed the Orphan Drug Act, in an attempt to encourage firms to develop drugs for the treatment of rare diseases (diseases borne by fewer than 200,000 Americans). See Lichtenberg and Waldfogel (2003).
Fig. 7.8 Number of original medical device PMAs reviewed by the FDA, 1981–2001
effectiveness of all devices classified as class III devices. The FDA maintains a PMA database (http://www.fda.gov/cdrh/pmapage.html). From this database, one can construct estimates of the number of PMA approvals, by various characteristics. Figure 7.8 shows the number of original PMAs reviewed by the FDA during the period 1981–2001. One characteristic is the identity of the advisory committee that has jurisdiction over the device. As table 7.8 indicates, there are nineteen advisory committees, but two committees account for more than half of all original PMAs.27 Moreover, it would be difficult to construct a "mapping" from PMAs classified by advisory committee to mortality data classified by ICD9 code. PMAs are also classified by "product code," but the number of distinct product codes is extremely large (almost as large as the number of PMAs—over 5,000), they do not appear to be hierarchically organized, and it would be difficult to construct a mapping from product codes to ICD9 codes. (Unfortunately, neither First DataBank nor anyone else produces a Device Indications Master Table.)28
27. Over 90 percent of PMAs are "supplemental" PMAs: applications to modify the design, manufacturing, or other aspects of original PMAs.
28. The PMA database includes "device description/indications" information, but disease coding of this information would be challenging and costly. Here is sample information about PMA Number P010054, Approval for the Elecsys Anti-HBs Immunoassay and Elecsys PreciControl Anti-HBs: "The Elecsys Anti-HBs Immunoassay is indicated for: The qualitative determination of total antibodies to the hepatitis B surface antigen (HBsAg) in human serum and plasma (EDTA). The electrochemiluminescence immunoassay 'ECLIA' is intended for use on the Roche Elecsys 2010 immunoassay analyzer. Assay results may be used as an aid in
Table 7.8  Distribution of medical device PMAs reviewed by the FDA, 1981–2001

Advisory committee              No. of original PMAs   Cumulative percentage of all original PMAs
Ophthalmic                      338                    38
Cardiovascular                  168                    57
Microbiology                    74                     65
General and plastic surgery     56                     71
Immunology                      45                     76
Gastroenterology and urology    32                     80
Orthopedic                      28                     83
Radiology                       23                     86
Obstetrics/Gynecology           22                     88
General hospital                18                     90
Clinical chemistry              17                     92
Anesthesiology                  15                     94
Ear, nose, and throat           12                     95
Dental                          12                     97
Physical medicine               10                     98
Neurology                       8                      99
Hematology                      6                      99
Clinical toxicology             6                      100
Pathology                       1                      100
Hence, due to data limitations, the models we will estimate will include measures of only one type of medical innovation: new drugs. Suppose that, in reality, mean age at death depends on both new drugs and other medical innovations, but the model we estimate includes only the first variable. If the rates of pharmaceutical and other medical innovation are positively correlated across diseases—diseases with high rates of new drug introduction also have high rates of other medical innovations—then the drug coefficient will overstate the effect of new drugs per se. It is not clear that there is a positive correlation: new drugs and other medical innovations may be substitutes, rather than complements.29 But, for the sake of argument, suppose that the correlation is positive. Indeed, suppose that there is a perfect positive correlation and that the estimated regression coefficients the determination of susceptibility to hepatitis B virus (HBV) infection for individuals prior to or following HBV vaccination, or where vaccination status is unknown. Assay results may be used with other HBV serological markers for the laboratory diagnosis of HBV disease associated with HBV infection. A reactive assay result will allow a differential diagnosis in individuals displaying signs and symptoms of hepatitis in whom etiology is unknown. The detection of anti-HBs is indicative of laboratory diagnosis of seroconversion from hepatitis B virus (HBV) infection. The Elecsys PreciControl Anti-HBs is indicated for: The preciControl Anti-HBs is used for quality control of the Elecsys Anti-HBs immunoassay on the Elecsys 2010 immunoassay analyzer. The performance of the PreciControl Anti-HBs has not been established with any other Anti-HBs assay.” 29. If the correlation is negative, we would underestimate the benefit of new drugs per se.
capture the effect of medical innovation in general, not just new drugs. Under certain reasonable assumptions, we can still identify the contribution of new drugs per se to longevity. Suppose that the marginal health benefit of $1 million of pharmaceutical R&D is equal to the marginal health benefit of $1 million of other biomedical R&D. Then the fraction of the overall health benefits of biomedical research attributable to new drugs is equal to the ratio of pharmaceutical R&D expenditure to total biomedical R&D expenditure. According to the Global Forum for Health Research (2002), the pharmaceutical industry accounted for 42 percent of global health R&D funded by advanced and transition countries in 1998. If only 42 percent of the estimated effect of new drug approvals on mean age at death is attributable to new drugs (and the remaining 58 percent is attributable to other medical innovations), then we should multiply the benefit estimates we obtain by 42 percent. We will therefore calculate the social (internal) rate of return to new drug development under two alternative assumptions: (a) new drugs account for 100 percent of the estimated impact on mortality, and (b) new drugs account for 42 percent of the estimated impact on mortality.
7.3 Measurement of Changes in the Age Distribution of Deaths, by Cause of Death
We used the Compressed Mortality File (CMF) to measure changes in the age distribution of deaths, by cause of death. The CMF is a county-level national mortality and population database spanning the years 1968–99, produced by the Office of Analysis and Epidemiology, National Center for Health Statistics, Centers for Disease Control and Prevention. The mortality database of the CMF is derived from the U.S. records of deaths that occurred in 1979–99. Deaths are classified by (a single) underlying cause in the CMF, as well as in other mortality databases, such as the World Health Organization's Global Mortality Database.30 The person completing the death certificate is instructed to report, according to his or her best medical opinion, "the chain of events leading directly to death, proceeding from the immediate cause of death (the final disease, injury, or complication directly causing death) to the underlying cause of death (the disease or injury that initiated the chain of morbid events which led directly to death)."31 For example,
30. See http://www3.who.int/whosis/menu.cfm?path=whosis.mort&language=english. Misallocation of deaths by cause would result in measurement error in the dependent variable. This would increase the standard error of β but would not necessarily result in any bias. Since multiple cause-of-death data files are available for the United States (see http://www.cdc.gov/nchs/products/elec_prods/subject/mortmcd.htm#description1), this issue may be explored in future research.
31. See the CDC's "Instructions for completing the cause-of-death section of the death certificate," http://www.cdc.gov/nchs/data/dvs/cod.pdf.
Table 7.9  Sample cause-of-death section of death certificate

                                                                                       Approximate interval between onset and death
Rupture of myocardium (immediate cause)                                                minutes
Due to (or as a consequence of): Acute myocardial infarction                           6 days
Due to (or as a consequence of): Chronic ischemic heart disease (underlying cause)     5 years
part I of the cause-of-death section of the certificate might be completed as in table 7.9. The system used to classify deaths changed in 1979 and again in 1999. The three classification schemes are different enough to make direct comparisons of cause of death difficult, so our analysis is confined to the period 1979–98.32 Counts and rates of death can be obtained by place of residence (U.S. state and county), age, race (white, black, and other), gender, year, and underlying cause of death (four-digit ICD code or group of codes). There are seventeen age groups: under 1 day, 1–6 days, 7–27 days, 28–364 days, 1–4 years, 5–9 years, 10–14 years, 15–19 years, 20–24 years, 25–34 years, 35–44 years, 45–54 years, 55–64 years, 65–74 years, 75–84 years, over 85 years, and unknown. We excluded infant deaths (age less than one year) and deaths at unknown ages. For each approximately two-digit ICD9 code and year, we calculated two statistics: mean age at death,33 and the fraction of deaths that occurred before age sixty-five. Data on the number of deaths, population, and the crude death rate, by age group for 1979 and 1998 are presented in table 7.10. The crude death rate declined in every age group except the highest (over eighty-five years). Despite this, the crude death rate for the entire population increased, due to aging of the population. The fraction of deaths occurring before age sixty-five decreased from 32 percent in 1979 to 24 percent in 1998. Figure 7.9 shows a comparison of mean age at death (from all causes) to life expectancy at birth over the period 1979–97 (Anderson 1999).34
32. Data for 1979–99 are available at the website http://wonder.cdc.gov/mortsql.html.
33. We assumed that deaths in a given age interval occurred at the mean of the lower and upper ages of the interval; for example, deaths at ages 1–4 occurred at age 2.5. We assumed that deaths at ages greater than 85 occurred at age 89.5.
34. Life expectancy (e_x)—the average number of years of life remaining for persons who have attained a given age (x)—is the most frequently used life table statistic. Calculation of the complete life table is derived from the probability of death (q_x), which depends on the number of deaths (D_x) and the midyear population (P_x) for each single year of age (x) observed during the calendar year of interest. There are two types of life tables: the generation or cohort life table and the current life table. The current life table (upon which these life expectancy figures are based) does not represent the mortality experience of an actual cohort. Rather, the current life table considers a hypothetical cohort and assumes that it is subject to
Table 7.10  Number of deaths, population, and crude death rate, by age group, 1979 and 1998

                              1979                                                    1998
Age group        Death count   Population     No. of deaths per 100,000   Death count   Population     No. of deaths per 100,000
1–4 years        8,108         12,637,000     64                          5,251         15,189,749     35
5–9 years        5,278         16,947,000     31                          3,530         19,920,862     18
10–14 years      5,868         18,445,000     32                          4,261         19,241,808     22
15–19 years      21,085        21,348,000     99                          13,788        19,539,327     71
20–24 years      27,634        21,096,000     131                         16,839        17,674,134     95
25–34 years      47,941        36,038,000     133                         42,516        38,774,410     110
35–44 years      57,723        25,114,000     230                         88,866        44,519,859     200
45–54 years      135,265       22,935,000     590                         146,479       34,584,884     424
55–64 years      286,966       21,448,000     1,338                       233,724       22,675,970     1,031
65–74 years      449,255       15,338,000     2,929                       458,982       18,395,293     2,495
75–84 years      493,676       7,598,000      6,497                       681,663       11,952,189     5,703
Over 85 years    328,725       2,197,000      14,962                      612,575       4,053,650      15,112
Total            1,867,524     221,141,000    844                         2,308,474     266,522,135    866

Source: CDC Compressed Mortality File.
Life expectancy at birth is higher than mean age at death. For example, in 1997 life expectancy at birth was 76.5 years and mean age at death was 71.9 years. However, the 1979–97 increase in mean age at death (4.0 years) was greater than the increase in life expectancy at birth (2.6 years).
7.4 Empirical Results
Estimates of equations (6) and (7) are presented in table 7.11. All equations are estimated via weighted least squares, where the weight is equal to the number of deaths.35 In the first column, the dependent variable is mean age at death, drugs not labeled for a given indication are excluded, and we do not distinguish between priority-review and standard-review drugs.
the age-specific death rates observed for an actual population during a particular period. Thus, for example, a current life table for 1997 assumes a hypothetical cohort subject throughout its lifetime to the age-specific death rates prevailing for the actual population in 1997. The current life table may thus be characterized as rendering a "snapshot" of current mortality experience and shows the long-range implications of a set of age-specific death rates that prevailed in a given year.
35. Let N_it represent the number of deaths from disease i in year t and N_i represent the average annual number of deaths from disease i during 1979–98. The estimates obtained when the weight is N_i are very similar to the estimates obtained when the weight is N_it. This reflects stability over time in the relative number of deaths caused by different diseases. About 81 percent of the increase in mean age at death during the period was within-disease increase; only 19 percent was between-disease increase (attributable to a changing mix of diseases).
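The two statistics described in section 7.3 can be computed from the CMF cell counts using the interval-midpoint convention of footnote 33. The sketch below is illustrative only: the age-group labels and the sample cell are hypothetical, and infant deaths and deaths at unknown ages are assumed to have been dropped already.

```python
# Midpoints of the CMF age intervals (footnote 33); deaths over age 85 are
# assigned age 89.5.
AGE_MID = {
    "1-4": 2.5, "5-9": 7.0, "10-14": 12.0, "15-19": 17.0, "20-24": 22.0,
    "25-34": 29.5, "35-44": 39.5, "45-54": 49.5, "55-64": 59.5,
    "65-74": 69.5, "75-84": 79.5, "85+": 89.5,
}

def mortality_indicators(deaths_by_age):
    """deaths_by_age: {age group: death count} for one disease-year cell.
    Returns mean age at death, the fraction of deaths before age 65, and
    the cell's total deaths (used as the regression weight)."""
    total = sum(deaths_by_age.values())
    mean_age = sum(AGE_MID[g] * n for g, n in deaths_by_age.items()) / total
    frac_under65 = sum(n for g, n in deaths_by_age.items() if AGE_MID[g] < 65) / total
    return mean_age, frac_under65, total

# Hypothetical cell for one two-digit ICD9 code in one year.
cell = {"35-44": 120, "45-54": 310, "55-64": 640, "65-74": 980,
        "75-84": 1_050, "85+": 660}
print(mortality_indicators(cell))
```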
Fig. 7.9  Mean age at death versus life expectancy at birth, 1979–97
The coefficient on the total stock of drugs is positive but only marginally significant (p-value = .11). The second column is the same, except that unlabeled drugs listed in the NDDF Drug Indications Master Table are included. This has a modest positive effect on the point estimate of β but reduces its standard error, so that the estimate is now highly significant (p-value = .02). This is consistent with the American Medical Association Council on Scientific Affairs' observation that "the prevalence and clinical importance of prescribing drugs for unlabeled uses are substantial" and with the hypothesis that increases in the stock of (labeled and unlabeled) drugs to treat a condition increase the mean age at which people die from that condition. In the equations reported in columns (3) and (4), the dependent variable is an alternative statistic of the age distribution of deaths: the fraction of deaths that occur before the age of sixty-five. These estimates seem to confirm the estimates in the first two columns: when unlabeled indications are excluded, β is insignificantly different from zero, but when they are included, β is negative and highly significant, indicating that increases in the stock of drugs reduce the probability of dying before the age of sixty-five. In columns (5) and (6), the dependent variable is again mean age at death, but the stock of drugs is classified by therapeutic potential (i.e., disaggregated into priority-review and standard-review drugs). Whether or not unlabeled indications are included, β_P (the coefficient on the stock of priority-review drugs) is positive and highly significant, and β_S (the coefficient on the stock of standard-review drugs) is insignificantly different
Table 7.11  Estimates of equations (6) and (7)
(each cell shows the coefficient, with its t-statistic and p-value in parentheses)

Column   Dependent variable              Unlabeled indications   DRUG_STOCK                PRI_STOCK                  STD_STOCK
(1)      Mean age at death               Excluded                0.01022 (1.58, 0.1147)
(2)      Mean age at death               Included                0.01268 (2.27, 0.0232)
(3)      Fraction dying before age 65    Excluded                –0.0001 (0.81, 0.417)
(4)      Fraction dying before age 65    Included                –0.00029 (2.66, 0.0079)
(5)      Mean age at death               Excluded                                          0.05324 (2.43, 0.0152)     –0.00691 (0.65, 0.5129)
(6)      Mean age at death               Included                                          0.07485 (4.01, .0001)      –0.00868 (1.05, 0.2946)
(7)      Fraction dying before age 65    Excluded                                          –0.00084 (1.98, 0.0484)    0.00019 (0.94, 0.349)
(8)      Fraction dying before age 65    Included                                          –0.00161 (4.47, .0001)     0.00017 (1.05, 0.2923)
(9)      Mean age at death               Excluded                                          0.04501 (2.51, 0.0122)
(10)     Mean age at death               Included                                          0.06493 (4.03, .0001)
(11)     Fraction dying before age 65    Excluded                                          –0.0006 (1.76, 0.0794)
(12)     Fraction dying before age 65    Included                                          –0.00142 (4.56, .0001)

Notes: All equations are estimated via weighted least squares, where the weight is equal to the number of deaths. All equations include disease and year fixed effects.
from zero. This is not surprising, since, as discussed earlier, priority-review drugs are those that represent a "significant improvement compared to marketed products, in the treatment, diagnosis, or prevention of a disease," while standard-review drugs are those that "appear to have therapeutic qualities similar to those of one or more already marketed drugs." Once again, the estimate of β_P is larger and more significant when unlabeled indications are included than it is when they are excluded. Columns (7) and (8) report analogous regressions, in which the dependent variable is the fraction of deaths that occur before the age of sixty-five. Once again, we cannot reject the null hypothesis that the stock of standard-review drugs has no effect on mortality, but the hypothesis that the stock of priority-review drugs has no effect on mortality can be rejected, especially when unlabeled indications are included. Since β_S is not significant in any equation in columns (5)–(8), in columns (9)–(12) we estimate models that include only the stock of priority-review drugs. The estimates of β_P in columns (9)–(12) are fairly similar to their counterparts in columns (5)–(8). We will use the estimate of β_P in column (10) to evaluate the contribution of pharmaceutical knowledge-capital accumulation to the increase in the mean age at death during the period 1979–98. Sample mean36 values of the dependent and independent variables in 1979 and 1998 are shown in table 7.12. Mean age at death increased by 3.8 years from 1979 to 1998: ΔAGE_DEATH = 3.8 years. The mean stock of priority-review drugs increased by 6.0 drugs: ΔPRI_STOCK = 6.0 drugs. The estimated contribution of the increase in the stock of priority-review drugs to the increase in mean age at death is β_P × ΔPRI_STOCK = .065 × 6.0 ≈ 0.39 years. The increase in the stock of priority-review drugs is estimated to have increased mean age at death by 0.39 years (4.7 months) during this period. Hence, about 10 percent of the total increase in mean age at death is due to the increase in the stock of priority-review drugs.37 Now we will attempt to compare the value of the longevity benefit of pharmaceutical knowledge-capital accumulation to its cost. During the period 1979–98, 508 NMEs (about 25 per year) were approved by the FDA. The Office of Technology Assessment estimated that the average cost of an NME approval is $359 million.38 More recently, DiMasi, Hansen, and Grabowski (2003) examined the research and development cost of sixty-eight randomly selected new drugs by conducting a survey of ten pharmaceutical firms. These data were used to estimate the average pretax cost of new drug development. The costs of compounds abandoned
36. These are weighted means, weighted by the number of deaths.
37. This estimate may be conservative, because it includes only the within-disease increase in mean age at death.
38. This is the cost of all NMEs approved—both priority-review and standard-review. About 40 percent of NMEs are priority-review NMEs.
Table 7.12  Sample mean values of the dependent and independent variables in 1979 and 1998

Year      Mean age at death   Stock of priority-review drugs
1979      69.6                4.6
1998      73.4                10.6
Change    3.8                 6.0

Note: Means are weighted by the number of deaths.
during testing were linked to the costs of compounds that obtained marketing approval. The estimated average out-of-pocket cost per new drug was $403 million (in 2000 dollars). They estimated that, on average, $121 million was spent during the preclinical phase and that $282 million was spent during the clinical phase. As figure 7.4 indicates, during the 1980s and 1990s the average duration of each phase was about six years, and the clinical phase was followed by an approval phase that lasted about two years. We will assume that annual expenditures were constant within each of the three phases: the industry spent about $20 million per drug in each of the first six years of drug development, $47 million per drug in each of the next six years (years 7–12) of drug development, and nothing during the approval phase (years 13–14). Hence, total expenditure on all 508 NMEs was $10.2 billion during each of the first six years of drug development, and $23.9 billion during each of the next six years (years 7–12) of drug development. The increase in the stock of priority-review drugs is estimated to have increased mean age at death by 0.39 years during this period. There are about 2 million deaths per year, so the total number of life-years gained per year is 0.39 × 2 million ≈ 800,000 life-years per year. To compute the social rate of return to new drug development, one must assign a value to a statistical life-year (VSLY). Cost-effectiveness guidelines issued by the National Institutes of Health (NIH) are based on a VSLY in the $50,000–$100,000 range.39 Murphy and Topel (2003) argue that $150,000 is a more appropriate figure for the VSLY. I will calculate the social (internal) rate of return using alternative assumed values of a statistical life-year, ranging from $25,000 to $150,000. However, if we suppose, for concreteness, that the VSLY is $50,000, then the value of the annual gain in life-years is 800,000 × $50,000 = $40 billion. Presumably, knowledge capital does not depreciate (although it can be rendered obsolete by future knowledge capital accumulation), so even if no new drugs were approved after 1998, people
39. According to these guidelines, a medical procedure is cost effective if the associated cost per life-year gained is below $50,000 and is not cost effective if the associated cost per life-year gained is above $100,000. If the associated cost per life-year gained is in the range $50,000–$100,000, cost-effectiveness is indeterminate.
Fig. 7.10 Social rate of return to new drug development under alternative assumptions Note: When new drugs are assumed to account for 42 percent of the estimated longevity increase, and the value of a statistical life-year is assumed to be $25,000, the internal rate of return is not defined (i.e., it is negative).
would continue to experience the 0.39-year higher life expectancy in all future years. In other words, the $40 billion may be viewed as an annuity, which the American public would start to receive fifteen years after the beginning of drug development. Social (internal) rates of return to new drug development are shown in figure 7.10. The upper line shows the internal rate of return (IRR) for different assumed VSLY values, also assuming that 100 percent of the estimated longevity increase is due to new drugs per se (not to other medical innovations). For the range of VSLY values underlying the NIH cost-effectiveness guidelines, the IRR is in the 10–15 percent range. A VSLY of $150,000 implies an 18 percent IRR. The lower line shows the IRR for different assumed VSLY values, assuming that only 42 percent of the estimated longevity increase is due to new drugs per se. As noted above, R&D investment by pharmaceutical firms is estimated to account for 42 percent of all (public and private) biomedical R&D investment. The IRR for a VSLY of $25,000 is not defined (i.e., it is negative). However, $25,000 is far below the range of VSLY values underlying the NIH cost-effectiveness guidelines. In this range ($50,000–$100,000), the IRR is in the 6–9 percent range. A VSLY of $150,000 implies a 12 percent IRR. For several reasons, these IRR estimates are likely to be conservative. The estimates along the lower line assume that pharmaceutical innovation is perfectly correlated across diseases with other medical innovation.
In reality, new drugs and other medical innovations are less than perfectly complementary; they may sometimes even be substitutes. None of the estimates account for the existence of benefits other than increased longevity, including reduced hospital expenditure and reduced limitations on work and other activities (see Lichtenberg 2001, 2005). Moreover, these rates of return reflect only the value of increased longevity among Americans, who consume only about 5 percent of the world's drugs, by volume.
7.5 Summary and Conclusions
People value leisure time as well as goods, so longevity increase is an important part of economic growth, broadly defined. R&D is the principal source of economic growth, and the pharmaceutical industry is the most R&D-intensive sector of the economy. In this paper we have assessed the contribution of pharmaceutical R&D to longevity increase (hence to economic growth) by analyzing the relationship between FDA approvals of NMEs and changes in the age distribution of deaths, using longitudinal disease-level data on all diseases. We computed the stock of drugs available (i.e., previously approved by the FDA) to treat a given condition in a given year by combining FDA data with data from First DataBank's National Drug Data File. We used the CDC's Compressed Mortality File to measure changes in the age distribution of deaths, by cause of death. The estimates indicated that approval of standard-review drugs—drugs whose therapeutic qualities the FDA considers to be similar to those of already marketed drugs—has no effect on longevity, but that approval of priority-review drugs—those considered by the FDA to offer significant improvements in the treatment, diagnosis, or prevention of a disease—has a significant positive impact on longevity. Increases in the stock of (labeled and unlabeled) drugs to treat a condition increase the mean age at which people die from that condition, and reduce the probability of dying before the age of sixty-five. The increase in the stock of priority-review drugs is estimated to have increased mean age at death by 0.39 years (4.7 months) during the period 1979–98. Ten percent of the total increase in mean age at death was due to the increase in the stock of priority-review drugs. The social rate of return to new drug development appears to have been quite high during the period 1979–98. But the average annual number of priority-review drugs approved by the FDA during the period 2000–2003 (8) is only 60 percent as great as the average annual number of priority-review drugs approved by the FDA during 1990–99 (13.4).40 Hence, there has recently been a 40 percent decline in
40. See http://www.fda.gov/cder/rdmt/pstable.htm.
the average annual contribution of pharmaceutical innovation to longevity increase. Hopefully this decline will soon be reversed.
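As a rough check, the internal-rate-of-return calculation summarized in figure 7.10 can be reproduced approximately with a short sketch. The timing conventions below (end-of-year flows, development beginning in year 0, and a 500-year truncation of the perpetual annuity) are assumptions introduced here for the illustration, so the computed rates will be close to, but not exactly, those in the figure.

```python
def npv(rate, cash):
    """Net present value of end-of-year cash flows; cash[t] is received in year t."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(cash))

def irr(cash, lo=1e-6, hi=1.0, tol=1e-7):
    """Bisection for the internal rate of return (unique here because the
    stream has a single sign change: outlays first, benefits later)."""
    if npv(lo, cash) <= 0:
        return float("nan")          # no positive IRR for this stream
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if npv(mid, cash) > 0 else (lo, mid)
    return (lo + hi) / 2.0

def social_irr(vsly, drug_share=1.0, horizon=500):
    """IRR of new drug development, in $ billions, under the text's assumptions:
    $10.2B/yr in years 1-6, $23.9B/yr in years 7-12, nothing in years 13-14,
    then an annuity of drug_share x 800,000 life-years x VSLY from year 15 on
    (truncated at `horizon` years as an approximation to a perpetuity)."""
    benefit = drug_share * 800_000 * vsly / 1e9
    cash = ([0.0]                # year 0: development begins
            + [-10.2] * 6        # preclinical phase outlays, all 508 NMEs
            + [-23.9] * 6        # clinical phase outlays
            + [0.0] * 2          # approval phase
            + [benefit] * horizon)
    return irr(cash)

for vsly in (50_000, 100_000, 150_000):
    print(vsly, round(social_irr(vsly), 3), round(social_irr(vsly, drug_share=0.42), 3))
```

Under these assumptions the computed rates fall broadly in the ranges quoted in the text (for example, roughly 18 percent at a VSLY of $150,000 when all of the longevity gain is credited to new drugs), with the exact values depending on the timing and horizon conventions, which the chapter does not spell out in full.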
References
Anderson, Robert. 1999. United States life tables, 1997. National vital statistics reports 47 (28). Hyattsville, MD: National Center for Health Statistics.
Bahk, Byong-Hyong, and Michael Gort. 1993. Decomposing learning by doing in new plants. Journal of Political Economy 101:561–83.
Christensen, L. R., and D. W. Jorgenson. 1973. Measuring the performance of the private sector of the U.S. economy, 1929–1969. In Measuring economic and social performance, ed. M. Moss, 233–351. New York: Columbia University Press.
Cranston, J. W., M. A. Williams, N. H. Nielsen, and R. J. Bezman, for the American Medical Association Council on Scientific Affairs. 1998. Unlabeled indications of Food and Drug Administration–approved drugs. Drug Information Journal 32:1049–61. http://www.ama-assn.org/ama/pub/article/print/2036-2420.html.
DiMasi, Joseph A. 2001. New drug development in the United States from 1963 to 1999. Clinical Pharmacology and Therapeutics 69 (5): 286–96.
DiMasi, Joseph A., Ronald W. Hansen, and Henry G. Grabowski. 2003. The price of innovation: New estimates of drug development costs. Journal of Health Economics 22 (2): 151–85.
First DataBank. 1998. National drug data file documentation manual. San Bruno, CA: First DataBank.
Food and Drug Administration (FDA) Center for Drug Evaluation and Research. 1999. From test tube to patient: Improving health through human drugs. September. http://www.fda.gov/cder/about/whatwedo/testtube-full.pdf.
Global Forum for Health Research. 2002. The 10/90 report on health research 2001–2002. Geneva: Global Forum for Health Research, May. http://www.globalforumhealth.org/pages/index.asp.
Griliches, Zvi. 1979. Issues in assessing the contribution of R&D to productivity growth. Bell Journal of Economics 10 (1): 92–116.
———, ed. 1984. R&D, patents, and productivity. Chicago: University of Chicago Press.
Griliches, Zvi, and Frank Lichtenberg. 1984. R&D and productivity at the industry level: Is there still a relationship? In R&D, patents, and productivity, ed. Zvi Griliches, 465–96. Chicago: University of Chicago Press.
Hulten, Charles R. 1992. Growth accounting when technical change is embodied in capital. American Economic Review 82 (4): 964–80.
Jones, Charles I. 1998. Introduction to economic growth. New York: Norton.
Jorgenson, Dale, and Zvi Griliches. 1967. Explanation of productivity change. Review of Economic Studies 34 (3): 249–83.
Lichtenberg, Frank. 2001. Are the benefits of newer drugs worth their cost? Evidence from the 1996 MEPS. Health Affairs 20 (5): 241–51.
———. 2005. Availability of new drugs and Americans' ability to work. Journal of Occupational and Environmental Medicine 47 (4): 373–80.
Lichtenberg, Frank, and Donald Siegel. 1991. The impact of R&D investment on productivity: New evidence using linked R&D-LRD data. Economic Inquiry 29 (April): 203–28.
Lichtenberg, Frank, and Joel Waldfogel. 2003. Does misery love company? Evidence from pharmaceutical markets before and after the Orphan Drug Act. NBER Working Paper no. 9750. Cambridge, MA: National Bureau of Economic Research, June. http://www.nber.org/papers/w9750.
Murphy, Kevin M., and Robert H. Topel. 2003. The economic value of medical research. In Measuring the gains from medical research: An economic approach, ed. Kevin M. Murphy and Robert H. Topel, 74–109. Chicago: University of Chicago Press.
National Science Foundation (NSF). Survey of Industrial Research and Development. http://www.nsf.gov/sbe/srs/sird/start.htm.
Nordhaus, William. 2003. The health of nations: The contribution of improved health to living standards. In Measuring the gains from medical research: An economic approach, ed. Kevin M. Murphy and Robert H. Topel, 74–109. Chicago: University of Chicago Press.
Pharmaceutical Research and Manufacturers of America (PhRMA). 2004. Pharmaceutical industry profile 2004. Washington, DC: PhRMA. http://www.phrma.org/publications/publications//2004-03-31.937.pdf.
Romer, Paul. 1990. Endogenous technological change. Journal of Political Economy 98 (October): S71–S102.
Sakellaris, Plutarchos, and Dan Wilson. 2001. The production-side approach to estimating embodied technological change. Federal Reserve Board Finance and Economics Discussion Series no. 2001-20. Washington, DC: Federal Reserve Board, April.
Solow, Robert M. 1956. A contribution to the theory of economic growth. Quarterly Journal of Economics 70 (February): 65–94.
———. 1962. Technical progress, capital formation, and economic growth. American Economic Review, Papers and Proceedings 52 (2): 76–86.
Comment
Eric J. Bartelsman
The main contribution of this paper is to provide estimates of the longevity impact of pharmaceutical R&D and estimates of the rate of return to this R&D. The bottom line of the paper is that pharmaceutical R&D investment provides a (monetized) return to U.S. residents of 18 percent through the effect of new and improved drugs on longevity. Further, 10 percent of the increase in longevity between 1979 and 1999 can be attributed to drug use. The study is important for a couple of reasons. The estimates may help guide policy to reduce the wedge between social and private returns to R&D, such as policy regarding intellectual property rights, patents, subsidies to R&D, and funding to the National Science Foundation, to name a few. Next, the paper contributes directly to the topic of this volume—namely, national accounting for intangible investment—by addressing questions such as these: what is the service flow generated by the unobserved stock of knowledge Eric J. Bartelsman is professor of economics and director of the Economic and Social Institute at the Free University of Amsterdam.
built up by past research expenditures? How does R&D add to our wealth? How does the accumulation over time work its way into GDP figures? In Lichtenberg's research, detailed information is drawn from a wide variety of sources, linked together, and aggregated to construct a data set with variation in the dimensions needed to identify the effect to be estimated. The choice of estimation strategy and the data-handling procedures were to a large extent predicated on the characteristics of the available raw data, and to a lesser extent on a theory describing the effect of pharmaceutical R&D on welfare of the nation. The theory says that the stock of pharmaceutical knowledge is an input into the production function for "extended income," a welfare measure that includes leisure time and traditional GDP. The theory further states that net additions to the stock of knowledge are the output of an innovation function that has R&D performance as an input, alongside spillovers from the existing stock, as well as depreciation and obsolescence. The theory is the canonical endogenous growth model, with an innovation function and an output production function. The empirical work requires molding the theory to fit the possibilities allowed by the available data. The rate of return on R&D calculation done in the paper is effectively simple. To start, a measure of real output is needed. To avoid the difficult issues of deflation and quality adjustment, a real indicator is defined—namely, the increase in longevity. Some possible refinements could include different weights for increases in longevity for different age groups or for increases in longevity for individuals in other drug-using countries (these are set to zero in the output measure considered here). On the other side of the production function, there is a real input measure. As proxy for the productive flow from a stock of pharmaceutical knowledge, the paper uses a count of the number of approved drugs that are available on the market each year to deal with each disease. The mean age of death or the percentage of people under age sixty-five is regressed on the number of approved drugs, with variation by disease over time, and fixed effects. While the method is straightforward, the data collection and organization turn out to be of crucial importance. Figure 7C.1 shows how the sources of data were used. The output indicator derives from a CDC data file on deaths per year by disease. The data used in the paper are aggregated by discrete age groups, while supposedly the collection of the data was by actual age. Because the lengthening of age through drugs may take place at the high tail of the distribution, the concern is that the top-coding of the discrete age categories may create bias in the estimates. The other place where categorization and aggregation may cause difficulties is in the definition of disease. At the highest level of the disease classification is the case where you get sick and then die. Lower down in the
Fig. 7C.1  Research design Lichtenberg
hierarchy, you get a sniffle from some specific virus and then you die. The question is this: at what level of aggregation do you have the proper variation in your X and Y variables to get the best result? And how can you decide upon the proper level in the disease hierarchy to match up the information on drugs and death? Unfortunately, the manner in which the data are collected and published limits the ability of the researcher to define the appropriate match. Other issues involved in calculating the rate of return are methods for valuing life-years. A considerable literature exists on this topic, and the paper uses standard estimates. More interesting for the calculation is the choice to use a stock of knowledge as the source for flow benefits to longevity. The direct approach would be to acknowledge that the R&D is embodied in the drugs and that the increase in person years actually comes from getting the prescription for the drugs. Without merged data on individual drug use and cause of death, the paper needs to work with the knowledge stock. The knowledge stock is proxied by a very simple measure: number of approved drugs available each year for each disease. However, it is difficult to assess whether it’s really the appropriate measure. A nice check on the validity, giving some more confidence in the measure, is to separately consider priority-review drugs. The stock measure was compiled by merging a file on drugs by disease with a file on drug approvals by year, as shown in figure 7C.1. Now, the only file available of drugs by disease is from 1999. That may be a bit of a problem. If certain drugs were on a similar file in earlier years but got dropped subsequently, there may be an upward bias in the stock increase. As an alternative, since the pharmaceutical knowledge is actually
embodied in the drugs, data on actual prescriptions by drug, by vintage of the approval date, provide a better measure of flows from the stock. The regression coefficient on impact could then be allowed to vary by vintage and would allow some estimate of the rate of obsolescence. That seems like a difficult data collection problem, but it is precisely the type of economic data that the Census Bureau is mandated to collect: sales of goods by product class. Scanner data from pharmacies would do the trick. If the nomenclature of the products sales data from the census matched those from the drug approval process, the proposed method would be a snap. Besides data issues, real life-and-death problems make the estimation difficult. The interactions between diseases, cures, and causes of death require attention. The cause of death noted in the data file is the underlying cause precipitating the need for care, not the immediate cause of death. If certain drugs delay the onset of the immediate cause, this will not be measured. For this, a framework is needed with disease histories by person and transition matrixes by state. The empirics would gauge the effect of drugs on transitions from wellness to disease to another disease to the absorbing state of death. As a suggestion for more feasible future work, I would like to see an explicit statement of the knowledge production function. This would link the R&D expenditures to additions to the stock, and the available flow from the stock would relate to the real output. In this framework longevity is a function of drug prescriptions, and the knowledge production function shows the arrival rate of the new drugs as a function of the R&D inputs. Some recent work on simultaneous models of production and innovation may guide further research. This additional work would bring out the need to look at obsolescence of complementary inputs. Which wealth stocks related to health are written off owing to new drugs? Lichtenberg mentioned medical devices as being a possible reason why longevity could increase. I worry about the interaction between the various stocks on adding to the production of real output. For measuring asset stocks, it thus may not suffice for statistical agencies to do a study on depreciation rates once every few decades. Finally, I think the paper misses closure. The paper never provides an estimate for private returns to R&D, so I can’t evaluate the policy related to the wedge between social and private returns. Nor does the paper explicitly make a statement about the valuation of the current stock of intangible wealth from pharmaceutical knowledge. Having a ballpark guess of how large the intangible wealth of the sector is relative to its tangible wealth would be useful. My closing remarks are not directly related to the paper but are suggestions for intangible investments that statistical agencies could make to improve the ability of researchers to undertake projects of the type we have
Fig. 7C.2  Statistical system for the new economy
just discussed. Figure 7C.1 shows a schematic of the research approach that was used in the paper. The research was feasible because data from different sources were made conformable and put into a database where the endogenous and exogenous variables were defined over the same units, in this case disease and year. The difficulty and time involved in this study lie in lining up the classifications to match. If the underlying data are collected using classifications that are orthogonal, no useful matching can be done. For example, data on drug sales, or R&D expenditures by field of endeavor, could not easily be matched to the disease categories used in the study. The basic strategy of statistical agencies is to line up data from various sources into so-called satellite accounts, at fairly high levels of aggregation. For this paper, satellite health accounts with dimensional detail that allowed linking of the various data sources would have helped.

A statistical system for the new economy would give researchers access to data in a manner that permits complicated studies at relatively low cost in time. The statistical agencies would then not only invest in primary data collection and the tabulation of results into static tables; they would also collaborate on definitions of the classifications and hierarchies needed to relate data collected from disparate sources. Figure 7C.2 presents a view of such a statistical system. The intangible investments needed for this are similar to the expenditures being made at present in the business sector, as other papers in this conference have discussed. Firms harness information technology to change the structure within their organizations and the way they transact with other organizations. They do so by investing in data dictionaries that describe, in as few dimensions as possible but in a very detailed manner, everything about the products that they are buying and selling. The collection agencies should do their best to listen to an integrating
organization—a bureau of standards—to decide on the classification schemes to use. The agencies should review all their separate little databases and build the metadata necessary so that a researcher or policymaker with a question can remotely design a research protocol and write the program code. The program code would be passed through the metadata and translated to generate research outcomes from the underlying data sources. I think that investments in this direction by statistical agencies would build a stock of data that would greatly enhance the ability to undertake projects of the type presented by Lichtenberg.
8 R&D in the National Income and Product Accounts A First Look at Its Effect on GDP Barbara M. Fraumeni and Sumiye Okubo
8.1 Introduction

Research and development (R&D) has long been of interest to researchers and policymakers because of its potential impact on economic growth. However, its impact is difficult to determine from the current measures in the national income and product accounts (NIPAs) and the standard growth-accounting model. The NIPAs and the growth-accounting model do not treat R&D as investment and thus underestimate R&D's contribution to national savings, the country's stock of knowledge, and the economy as a whole. Moreover, with the current measures, the links between R&D and technical changes and between technical changes and growth in gross domestic product are uncertain.

This paper treats R&D as investment rather than current expenditure. This treatment of R&D expenditures is a step toward producing more comprehensive and accurate measures of gross domestic product (GDP), gross domestic income (GDI), and national savings. This treatment also allows for better identification of the variables that are important to a
Barbara M. Fraumeni is chief economist of the Bureau of Economic Analysis, U.S. Department of Commerce. Sumiye Okubo is associate director for industry accounts at the Bureau of Economic Analysis, U.S. Department of Commerce. This paper represents views of the authors and is not an official position of the Bureau of Economic Analysis or the Department of Commerce. The authors would like to thank Brian Sliker for his work on updating the investment and capital stocks and his help in the writing of appendix C. David Kass, Mark Planting, and Robert Yuskavage also assisted in different aspects of the project. We would like to express our thanks to J. Steven Landefeld, Rosemary Marcuss, Brent Moulton, and Ralph Kozlow for their extensive comments and discussion.
sources-of-growth analysis and, therefore, the determination of the contribution of R&D to economic growth.1

This paper is a preliminary and exploratory examination of the role of R&D in the U.S. economy. It constructs a partial R&D satellite account (R&DSA) and modifies the NIPA framework by capitalizing R&D.2 It analyzes the impact of R&D on GDP, national saving, and other macroeconomic aggregates, and identifies the contribution of R&D to economic growth, using a sources-of-growth approach. Accomplishing both objectives entails modifying both the expenditure side and the income side of the accounts.

Capitalizing R&D requires modifying the NIPA structure by including R&D expenditures and benefits within the NIPA accounts. R&D expenditures are relatively easy to measure because data are available from the National Science Foundation (NSF). R&D benefits (returns to R&D capital) are much more difficult to measure, yet estimating them is critical to establishing a link between R&D, technical change, and growth in GDP. Among the measurement topics discussed in this paper are the magnitude of private and spillover returns to R&D, the R&D benefits that are already in the current measure of GDP, the lag with which R&D affects the economy, the rate of depreciation of R&D capital, and the appropriate deflator for R&D expenditures. As these measurement questions cannot be fully resolved here, this paper accordingly is an important, albeit only a first, look at the effect of R&D on the economy. Rates of return to R&D are drawn from past analyses of rates of return, and estimates of the R&D investment and capital stock balance sheet presented in a previously published Bureau of Economic Analysis (BEA) R&DSA are updated (Carson, Grimm, and Moylan 1994).3

8.2 Current Treatment of R&D in the NIPAs

The NIPA measures of investment include plant, equipment, and inventories acquired by private businesses, nonprofit institutions, and government, and net foreign investment, and exclude household consumer durables, as well as most intangible capital, such as R&D and, until the 1999 comprehensive NIPA revision, software. R&D expenditures are treated as an intermediate input for businesses and current consumption for nonprofit institutions and general government.4
1. In the April 2002 version of this paper, a net return to general government capital was imputed as well as R&D’s being treated as investment. 2. For a description of a full R&DSA, see Fraumeni and Okubo (2000, 2001). The current NIPA convention will be described later in the paper. 3. For a description of how the estimates from that project were updated, see appendix C. 4. All R&D activities are allocated to the general government sector in the national accounts. See Bureau of Economic Analysis (1998, p. 5).
Fig. 8.1  Analysis of a product innovation
8.2.1 What the Current NIPA Includes and Excludes

To identify what is included and what is excluded in current measures of GDP and GDI, we use the graph in figure 8.1.5 The graph shows the benefits from a product innovation that reduces the cost of an industry using the innovation. The product innovation reduces the price or cost per unit of output of the industry using the innovation from P1 to P2 and increases the output of that industry from Q1 to Q2.6 The sum of all hatched areas, A, B, D, and E, shows the economic benefits from the innovation. Area A plus area B shows benefits derived by buyers of the product innovation. Area A is the difference between what buyers paid for Q1 before the innovation (P1 Q1) and what they pay for Q1 after the innovation (P2 Q1). Area B is the difference between the maximum amount consumers would have been willing to pay—as determined by the demand curve—for each marginal unit of the product greater than Q1 up to Q2 and what they actually paid, P2 (Q2 – Q1). Area D plus area E shows the gross profit,7 equal to r dollars per unit, received by the innovator, which is assumed to be transferred from the firm using the innovation to the innovator; thus, potential spillover benefits to the innovation-using firm are appropriated by the innovating firm.8 The graph assumes that the market structure of the industry using

5. See Mansfield et al. (1977). 6. A similar graph could be used to illustrate the case of product innovation used by households. 7. The term gross profits is used because profits include depreciation on the investment in the innovation. 8. P2 – r is the innovator's per-unit cost of production, and P2 is the price for the innovation received by the innovator under the assumption that the gross profit is transferred to the innovator. Griliches distinguishes between two types of spillovers. One is the spillover from purchasing the results of R&D inputs at less than their full quality price. This spillover reflects a problem of measuring improvements in the quality of capital and materials and their prices correctly. The second type is knowledge spillovers and refers to ideas borrowed by research teams in one industry from the research results of another industry. See Griliches (1995, pp. 65–66).
the innovation is competitive and that the supply curve is perfectly horizontal in the relevant range. Because GDP measures what is actually paid for a product in any given time period, rather than what consumers would have been willing to pay, GDP in any given time period does not measure the consumer surplus gain associated with the innovation but only the resource saving, area D plus area E, which is indistinguishably included in the gross profits of the innovator and in the expenditures of the consumer. However, when one looks at changes in nominal GDP from one period to the next, the resource savings to consumers on the old quantity, area A, will likely be included in the change in GDP as spending on other goods and services.9 Thus, what will not be measured in nominal GDP will be the triangle area B, the Harberger triangle.10

What is included in changes in real GDP is a bit more complicated. A Laspeyres index of changes in real GDP will overestimate the change in real GDP in period two because it will include not only the entire consumer surplus gain—areas A and B—but also the area C. This overestimate occurs because the Laspeyres index values the entire output in period two at period-one prices, (P1 Q2)/(P1 Q1). A Paasche index will underestimate the change because it includes none of the consumer surplus gain. The Paasche index values the period-one output at period-two prices and thus includes only the areas D and F for period one in valuing the increase in real GDP in period two, (P2 Q2)/(P2 Q1). However, the Fisher index, a chain index that is a geometric mean of the Laspeyres and Paasche indexes, approximately includes the value of the consumer surplus gain, area B, excluded from the nominal GDP calculation, and the change in real GDP will be an average of the changes produced by the Paasche and Laspeyres indexes.

In contrast to the simple example presented, the innovator is often not paid for the spillover benefits to firms using the innovation. This paper, however, assumes that market benefits to business R&D, private and spillover,11 are included somewhere in current national accounts measures.12 In the following estimates, they are assumed to appear in GDP and aggregate property income estimates. (Extension of this aggregate analysis to the industry level would have to address this issue.)13

9. Unless labor and/or capital supply is reduced. 10. The Harberger triangle is defined as the consumer surplus gain with a decrease in price from P1 to P2, excluding the consumer surplus gain that is part of the total amount P1 Q1 paid for the product at the higher price. 11. Frequently, private and spillover returns are called direct effects and indirect effects, respectively. Indirect effects (spillovers) include the benefits from the use of higher-quality or new inputs developed through R&D undertaken by others and benefits from technology transfers. See Bureau of Labor Statistics (BLS; 1989, p. 5). 12. Spillover effects from business can also include unpaid-for benefits from new and improved consumer goods and services. 13. To undertake an industry analysis including spillovers, interindustry technology flow estimates similar to those developed by Scherer (1984) would have to be constructed.
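To make the index-number argument concrete, the following minimal Python sketch computes the Laspeyres, Paasche, and Fisher relatives just described for a hypothetical two-good economy; all prices and quantities are invented for illustration and are not drawn from the chapter's data.

# Hypothetical two-good economy: good A is the innovating industry's product, whose
# price falls and quantity rises; good B stands in for all other goods and services.
p1 = {"A": 10.0, "B": 5.0}      # period-one prices
p2 = {"A": 8.0, "B": 5.0}       # period-two prices (price of A falls from P1 to P2)
q1 = {"A": 100.0, "B": 200.0}   # period-one quantities
q2 = {"A": 130.0, "B": 205.0}   # period-two quantities (output of A rises from Q1 to Q2)

def value(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

laspeyres = value(p1, q2) / value(p1, q1)   # period-two output valued at period-one prices
paasche = value(p2, q2) / value(p2, q1)     # period-two prices used throughout
fisher = (laspeyres * paasche) ** 0.5       # geometric mean, the chain-type formula

print(f"Laspeyres {laspeyres:.4f}  Paasche {paasche:.4f}  Fisher {fisher:.4f}")
# The Laspeyres relative exceeds the Paasche relative because it weights the
# post-innovation quantity gain at the higher pre-innovation price; the Fisher
# index lies between them, consistent with the discussion of areas A, B, and C.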
Table 8.1  Estimated rates of return to private R&D (%)

Source                             Private     Social
Sveikauskas (1981)                 7–25        50
Bernstein and Nadiri (1988)        10–27       11–111
Bernstein and Nadiri (1991)        15–28       20–110
Nadiri (1993)                      20–30       50
Mansfield et al. (1977)            25a         56a
Goto and Suzuki (1989)             26          80
Terleckyj (1974)                   29          48–78
Scherer (1982, 1984)               29–43       64–147

a. These rates are median rates.
A number of researchers have estimated the private and social rates of return to private R&D capital. In general, these returns are gross returns, including both the net return to capital and depreciation (BLS 1989, p. 39). Estimated rates of return to private R&D,14 arranged by the lowest to the highest private rate-of-return estimates, are summarized in table 8.1. Private rates of return average from 20 to 30 percent. These private rates of return reflect the returns received by the innovator. Social rates of return, which include the spillover benefits, are much higher, ranging from an average lower bound of about 30 percent to an average upper bound of 80 percent. Although researchers have in various ways attempted to include nonmarket benefits, for the most part they reflect spillovers that we assume are already included in GDP. The private rates of return to R&D based on these studies are considerably higher than the average returns to other types of investments. It can be argued that R&D investments would require a higher rate of return than other investments because of the risk and uncertainty attached to R&D. There are more failures than successes associated with R&D investments— the rule of thumb often used is that for every successful project, ten projects fail. In addition, businesses investing in the R&D must take into account the likelihood of imitation by competitors, and also the uncertainty in the timing of commercialization of the R&D project, especially for basic and applied research. Because of the wide range of estimated rates of return, the assumption made is that the average private rate of return is 25 percent and the average social rate of return, which includes spillovers, is 50 percent.15 14. See Council of Economic Advisers (CEA; 1995, p. 5). The CEA table is adapted from Griliches (1992) and Nadiri (1993). Leo Sveikauskas has pointed out in conversations that the rates of return shown in the CEA table represent different types of returns. For example, what is called a private return may be a return not only to the R&D performer but also to all other firms within the industry. However, his conclusion is that the rates of return used by Fraumeni and Okubo are reasonable rates of return, if anything perhaps somewhat too low. 15. A recent Joint Economic Committee Staff Report of the U.S. Congress (1999, p. 12) concluded that it is reasonable to assume that the private rate of return is about 25 percent
In contrast to the returns to private R&D, the returns to nonprofit institutions and general government R&D are less likely to be included in the existing measure of GDP, partly because of the way in which nonprofit institutions and general government are counted in GDP and partly because of the different nature of nonprofit institutions and general government R&D. Because the output of nonprofit institutions and general government is for the most part not sold in markets, it is assumed to be equal to their input costs. And because no input cost is associated with R&D beyond the original investment period and the R&D output is not sold in markets, no direct value is put on the returns to the R&D of nonprofit institutions and general government, and the value of these types of institutions’ R&D is understated. Second, because much of the output of nonprofit institutions and general government R&D is likely to be in nonmarket goods and services, such as reduced morbidity and mortality, it is less likely to be included in GDP, which is a measure of market goods and services. Because nonprofit institutions and general government tend to focus their R&D on nonmarket benefits and do not have to pass the market test that private firms do, their rate of return on R&D is arbitrarily assumed to be one-third smaller than the return to private R&D; that is, the private rate of return for nonprofit institutions and general government is assumed to be 16.7 percent, and the social rate of return, 33.4 percent. Table 8.2 lists the assumptions made regarding what type of benefits are included in the current NIPA tables and what benefits are included in the estimates in this paper. It also lists the assumptions made about the gross rate of return on R&D capital for all performing sectors, the R&D deflator, the depreciation rate, and the lag structure. 8.2.2 Effects of the Current Treatment A country’s national accounts ideally provide measures of the composition and growth of its economic activity. To the extent that the economic accounts do not include all economic activities or classify some expenditures as intermediate when they actually represent a final use, the accounts are an incomplete basis for measuring a country’s growth. In particular, not taking into account R&D investment and other types of intangible capital such as education embodied in human capital, understates investment, net wealth, and national savings.16 This understatement is larger if intangible capital has risen in importance in the U.S. economy over the past decade, as some have argued.17 and that the social rate of return is about twice as high as the private rate. The Economic Report of the President concludes that the social return to R&D averages about 50 percent (Economic Report of the President 1995, box 3–5, p. 122). A recent study of patenting by R&D laboratories of a manufacturing firm conducting R&D estimated the average private rate of return to product R&D to be about 21 percent (Arora, Ceccagnoli, and Cohen 2002). 16. See Eisner (1989, chaps. 1 and 2) and Kendrick (1976, pp. 9–11). 17. See, for example, Nakamura (2001).
Table 8.2  Assumptions

Benefits
                                                   Current measures               Adjusted measures
Return to business R&D capital                     Social benefits included       No change: social benefits included
Return to nonprofit institutions and
  general government R&D capital                   Spillover benefits included    Private and spillover benefits included

Gross rates of return (%)
Rates of return on:                                Private return    Spillover return    Social return
  Private R&D                                      25                25                  50
  Nonprofit institutions and general
    government R&D (2/3 of the above rates)        16.7              16.7                33.4

Other
Deflator: private fixed nonresidential investment
Depreciation rate (%): 11
Lag: one year
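As a purely illustrative sketch of how the assumptions in table 8.2 translate into imputed income flows, the Python fragment below applies the assumed gross rates of return and the 11 percent geometric depreciation rate to a hypothetical R&D capital stock. The stock value is invented for the example, and splitting the gross return into depreciation plus net return is a simplification of the chapter's procedure.

# Table 8.2 assumptions: gross private rates of return by performing sector and an
# 11 percent geometric depreciation rate. The stock figure below is hypothetical.
GROSS_PRIVATE_RATE = {"business": 0.25, "nonprofit_or_general_government": 0.167}
DEPRECIATION_RATE = 0.11

def imputed_returns(stock, sector):
    """Split the gross private return on an R&D stock into depreciation and net return."""
    gross = GROSS_PRIVATE_RATE[sector] * stock
    depreciation = DEPRECIATION_RATE * stock
    net_return = gross - depreciation
    return gross, depreciation, net_return

stock = 100.0  # hypothetical R&D capital stock, billions of dollars
for sector in GROSS_PRIVATE_RATE:
    gross, dep, net = imputed_returns(stock, sector)
    print(f"{sector}: gross {gross:.1f}, depreciation {dep:.1f}, net return {net:.1f}")
# For the nonprofit/general government sector the depreciation component (11.0)
# exceeds the net return (5.7), which is why depreciation is commonly the larger
# piece of the private return in the chapter's tables.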
8.3 Proposed Treatment of R&D

R&D uses resources to create products or output for future, rather than current, consumption, and in many cases it provides output and benefits long into the future, especially with seventeen-year—or, more recently, twenty-year—patent protections.18 In this way, R&D resembles investment more closely than it does intermediate inputs or current consumption. R&D adds to the stock of knowledge or productive capital and wealth, and it provides a flow of services from this stock over time, rather than in one period, depreciating over time like plant and equipment. Accordingly, R&D expenditures should be capitalized and depreciated and treated the same as other NIPA investment. In this paper, R&D input refers to R&D performance (as opposed to funding) by business, government, and nonprofit institutions.19 The methodology assumes constant returns to scale and assumes that factors, including R&D capital, are chosen to minimize costs and are hired until the marginal revenue products of these factors are equal to their purchase price.20

18. Benefits from R&D continue past the end of patent protection, or, alternatively, they may disappear before the end of patent protection. In addition, R&D benefits may shift toward nonperformers, such as other firms with spillovers, or to consumers. 19. Using a performer basis begs the question of whether it is the performer or the funder, if different from the performer, who receives the private return. 20. See equation (2) of this paper and annex 3 of OECD (2001), particularly pages 124–26.
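The capitalization mechanics can be sketched with the perpetual inventory method. The fragment below uses the geometric depreciation rate and one-year lag assumed in table 8.2, but the expenditure series is hypothetical and the calculation is only a simplified stand-in for the estimates described in the chapter's appendix C.

# Perpetual inventory sketch: real R&D spending enters the stock with a one-year lag
# and then depreciates geometrically at 11 percent per year. Figures are illustrative.
DELTA = 0.11   # geometric depreciation rate for R&D capital (table 8.2)
LAG = 1        # years before R&D expenditure enters the productive stock

def rd_capital_stock(real_expenditures, delta=DELTA, lag=LAG, initial_stock=0.0):
    """Return end-of-year R&D stocks: K_t = (1 - delta) * K_{t-1} + I_{t-lag}."""
    stocks = []
    k = initial_stock
    for t in range(len(real_expenditures)):
        investment = real_expenditures[t - lag] if t >= lag else 0.0
        k = (1.0 - delta) * k + investment
        stocks.append(k)
    return stocks

spending = [10.0, 11.0, 12.0, 13.0, 14.0]   # hypothetical real R&D expenditures by year
print([round(k, 2) for k in rd_capital_stock(spending)])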
8.3.1 Capitalizing R&D Capitalizing R&D produces several changes in the national accounts, in terms of the composition and level of GDP. These changes are summarized in table 8.3 and described in the following three subsections. The first change is to treat R&D expenditures by business as investment on the expenditure side and not as an intermediate expense. Because R&D is no longer considered an expense, property-type income21 increases by an amount equal to the expenditure, reflecting changes in profits and depreciation of R&D capital. The second change is to reclassify R&D expenditures by nonprofit institutions and general government from consumption to investment.22 Consumption of nonprofit institutions serving persons is part of personal consumption expenditures (PCE) in the accounts, and general government consumption is part of government consumption.23 Third, capitalizing R&D expenditures of nonprofit institutions and general government increases consumption by an amount equal to private returns to nonprofit institutions and general government R&D. These private returns can also be called imputed services from the R&D capital consumed by nonprofit institutions and general government in the current period, a terminology that highlights the consumption nature of these returns. On the income side, depreciation and profits (or current surplus of general government) are increased by an amount equal to private returns to nonprofit institutions (or private returns to general government R&D).24 Spillover returns, regardless of their source, are assumed to be already included in GDP. Thus, on the income side of the national accounts, all returns to R&D capital can be identified—private returns to nonprofit institutions and general government R&D can be added to spillover returns that are already included in GDP: that is, spillover returns from nonprofit institutions and general government R&D and social returns to business capital. Treating R&D expenditures as investment in the NIPA would make these expenditures fully comparable to other expenditures on intangibles, 21. Property-type income is defined as the sum of corporate profits, proprietors’ income, net interest, capital consumption allowances, inventory valuation adjustments, rental income of persons, business transfer payments, and surplus of government enterprises, less subsidies. Alternatively, it is gross domestic income (GDI) less compensation of employees, indirect business tax and nontax liabilities, and the statistical discrepancy. 22. The expenditures include those by federally funded research and development corporations (FFRDCs). Government entities that perform R&D, such as public colleges and universities, are all classified as being part of general government. 23. Some nonprofit institutions serve business, not persons, but this is a quite small percentage of nonprofit institutions’ economic activity. Nonprofit institutions serving business are treated as businesses in NIPA. 24. Current surplus for general government is the category comparable to profits for private entities.
Table 8.3  Changes to the national accounts

R&D imputations, R&D performed by business:
  Gross domestic product
    Treatment in current-measure GDP: intermediate input
    Adjusted GDP: reallocate to investment
    Change in current-measure GDP: increase
  Gross domestic income
    Adjusted GDI: increase in profits and depreciation
    Capitalizing R&D, change in current-measure GDI: increase

R&D imputations, R&D performed by nonprofit institutions:
  Gross domestic product
    Treatment in current-measure GDP: consumption (PCE)
    Adjusted GDP: (1) reallocate to investment; (2) increase in consumption = private R&D returns (spillover returns already in GDP)
    Change in current-measure GDP: (1) no change; (2) increase
  Gross domestic income
    Adjusted GDI: increase in private returns to R&D capital (spillover returns already in GDI)
    Capitalizing R&D, change in current-measure GDI: increase

R&D imputations, R&D performed by general government:
  Gross domestic product
    Treatment in current-measure GDP: government consumption
    Adjusted GDP: (1) reallocate to investment; (2) increase in consumption = private R&D returns (spillover returns already in GDP)
    Change in current-measure GDP: (1) no change; (2) increase
  Gross domestic income
    Adjusted GDI: increase in private returns to R&D capital (spillover returns already in GDI)
    Capitalizing R&D, change in current-measure GDI: increase

Note: PCE = personal consumption expenditure.
such as software, that are already considered investments.25 This treatment represents a step toward producing a comprehensive and more accurate measure of investment and savings in the U.S. NIPA, as well as of capital stock and depreciation, the value of services from R&D and other fixed capital, and net domestic product, and, as a result, improved measures of economic output and growth (Eisner 1989, 14–17). It provides a basis for addressing important macroeconomic, technology, and tax policy concerns and better informs policymakers about the true size of national saving and the nature of choices being made between current and future consumption.

Although this treatment provides conceptually improved estimates of output and growth, R&D is not treated as investment in the NIPAs for several reasons. First, R&D expenditures do not have an easily identifiable set of assets that can be measured or valued in a balance sheet.26 Unlike plant and equipment and software, R&D capital is not generally sold for a market price. Thus, services from R&D capital cannot be easily imputed from a representative set of market values, as can be done, for example, with imputed rents from owner-occupied housing. R&D is usually measured on a cost basis and does not represent a final demand value. Second, the rate of return to business R&D is included in the returns to all fixed capital—plant, equipment, and R&D; separating out the returns to R&D is as thorny a problem as estimating services of R&D capital. A third and related problem is one of appropriability: other private producers may also benefit from the R&D, either as imitators or as buyers of the new product incorporating the new technology. Also difficult to determine are spillover benefits from nonprofit institutions' and government R&D investments, as well as those spillovers, such as pollution-reduction R&D, from which society as a whole benefits and for which no market exists. Other problems in measuring R&D capital and R&D services include the choice of deflators, service lives, depreciation, the rates of return, and the lag structure, or the length of time before the benefits from R&D are realized.

These problems create uncertainty about estimates of R&D capital and its rate of return, but they can be addressed by using a supplemental or satellite account. Satellite accounts provide a means of experimenting with methods of estimating R&D capital and alternative scenarios of R&D returns to get a picture of the order of magnitude of the size and impact of R&D capital on GDP, without reducing the usefulness of the main accounts. The R&D satellite account tests the sensitivity of the estimates using alternative assumptions about the R&D deflator, depreciation rates, and the lag structure.27

25. Software was capitalized beginning with the 1999 comprehensive NIPA revision. For a description of the methodology and quantitative impacts, see Bureau of Economic Analysis (2000). 26. See System of National Accounts 1993 (Commission of the European Communities et al. 1993) for discussion of the treatment of R&D in the national accounts (pp. 9–10, para. 1.51). 27. Alternative assumptions are discussed in appendix C, and the effect of alternative assumptions on the results is presented in appendix A.

8.3.2 Adjusted National Account Tables

Treating R&D as investment and computing rates of return to R&D change the estimates of GDP and the components of the accounts. Tables 8.4, 8.5, and 8.6 show revised NIPA tables incorporating these changes and expanding the detail for R&D. The tables highlight the changes and show data in 1996 dollars for selected years: 1961, 1966, 1973, 1995, and 2000. These estimates are discussed in section 8.4.1.28

Tables 8.4 and 8.6 show the changes to the national accounts by providing a numerical link between current-measure GDP and adjusted GDP for these selected years. Table 8.4 focuses on expenditure components (GDP); table 8.6 focuses on income components (GDI). The difference between GDP and GDI is the statistical discrepancy, which is unaffected by the innovations in the R&DSA. Table 8.5 shows adjusted measures for several other NIPA aggregates.

In the adjusted-GDP table (table 8.4), which is based on NIPA tables 1.1 and 1.2, estimates of the private return to R&D capital, broken out by net return and depreciation, are included under consumption for nonprofit institutions and general government. Under all investment categories, "Completed research and development" and "Change in R&D-in-progress" are added. The sum of these two categories is equal to R&D expenditures. These two subcategories of investment exist because we assume that R&D expenditures enter the capital stock and that benefits from R&D are realized with a one-year lag. Accordingly, completed R&D is equal to expenditures in the previous year, and the change in R&D-in-progress is equal to the difference in expenditures between year t and year t – 1.29

The addenda to table 8.4 show how the current measure of GDP changes when R&D is capitalized, and show separately an R&D component and a net return component in the adjusted GDP. First, R&D investment is listed as an addition, but a significant portion of R&D expenditures is subtracted because it is included in current-measure GDP. This portion includes R&D expenditures of most nonprofit institutions and of general government.30 The largest component of R&D expenditures excluded from

28. This paper does not consider international activities, and therefore there are no data for the bolded entries under net exports of goods and services. 29. Lags are discussed in appendix C, and alternative lag scenarios are presented in appendix A. 30. Some government R&D investment is already capitalized in the current national accounts measures. Adjustments are made to deduct what can be specifically identified—R&D software defense expenditures—from the estimates of other-than-R&D investment, capital stock, and depreciation.
Table 8.4
Estimated components for adjusted gross domestic product (in billions of 1996 dollars) 1961
1966
1973
1995
2000
Adjusted GDP
2,446
3,254
4,169
7,705
9,475
Personal consumption expenditures Households Durable goods Nondurable goods Services Nonprofit institutions Durable goods Nondurable goods Services Private returns to R&D capital Net return Depreciation Other services Gross private domestic investment Business fixed investment Nonresidential Structures Equipment and software Residential Completed R&D Change in R&D-in-progress Change in business inventories Nonprofit institutions fixed investment Nonresidential Structures Equipment and software Residential Completed R&D Change in R&D-in-progress Net exports of goods and services Exports Goods Services R&D expenditures Other services Imports Goods Services R&D expenditures Other services Government consumption expenditures and gross investment Consumption expenditures Goods Services Private returns to R&D capital
1,541
2,008
2,681
5,087
6,271
2 1 1
4 1 2
7 2 5
17 6 11
22 7 14
304
483
652
1,286
2,001
33 1
43 4
47 2
121 12
195 18
3 1 (18)
5 0 (40)
6 0 (62)
16 1 (78)
21 1 (399)
673
834
901
1,411
1,584
7
10
17
39
49
R&D in the National Income and Product Accounts Table 8.4
(continued)
Net return Depreciation Other services Fixed investment Nonresidential Structures Equipment and software Residential Completed R&D Change in R&D-in-progress Current measure GDP R&D R&D fixed investment Business fixed investment Nonprofits institutions fixed investment General government fixed investment Private returns to R&Da Nonprofits institutions returns to R&D General government returns to R&D Less R&D expenditures in current measure GDP Adjusted GDP
287
1961
1966
1973
1995
2000
2 5
4 7
6 11
13 26
17 32
7 1
13 1
16 1
35 2
47 3
Addenda 2,432 23 45 34 3 8 9 2 7
3,228 36 67 47 6 14 14 4 10
4,123 55 73 49 6 17 24 7 17
7,544 161 187 133 16 37 56 17 39
9,224 253 285 213 22 50 71 22 49
30
45
42
81
103
2,446
3,254
4,169
7,705
9,475
Notes: The value of some entries is affected by rounding. Italic numbers appear as listed in the addenda to the table. The sum of the bold numbers for completed R&D and change in R&D-in-progress appear in the addenda to the table. Bold italic titles show R&D components. a The returns to R&D listed here are only returns to R&D performed by nonprofit institutions and general government (private returns) as other returns to R&D, specifically spillovers which are all assumed to fall to business, and returns to business performers, are assumed to be already included in the current measure of GDP. See table 8.2, the addenda to table 8.6, and the text section “What the Current NIPA Includes and Excludes.”
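As a small illustration of the two R&D investment lines that appear in table 8.4, the sketch below splits a hypothetical expenditure series into completed R&D and the change in R&D-in-progress under the one-year lag assumption; the two pieces sum back to current-year expenditures, as the text notes. The dollar figures are invented for the example.

# Split R&D expenditures into the two investment subcategories used in table 8.4,
# assuming R&D enters the capital stock with a one-year lag. Figures are illustrative.
expenditures = {1999: 240.0, 2000: 253.0}   # hypothetical R&D spending by year

def completed_rd(year):
    """Completed R&D in year t equals R&D expenditures in year t - 1."""
    return expenditures[year - 1]

def change_in_rd_in_progress(year):
    """Change in R&D-in-progress equals expenditures in year t minus those in year t - 1."""
    return expenditures[year] - expenditures[year - 1]

year = 2000
total = completed_rd(year) + change_in_rd_in_progress(year)
print(total == expenditures[year])   # True: the two lines sum to current-year expenditures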
current-measure GDP is expenditures for R&D performed and funded by business.31 The subtractions are a smaller part of R&D fixed investment in 2000 than they are in 1961 and reflect the larger share of R&D performed and funded by business in all R&D in 2000.32 Of total returns to R&D, only 31. General government funds R&D performed by business and nonprofit institutions, business funds R&D performed by nonprofit institutions and general government, and nonprofit institutions fund R&D performed by general government. Of these five cross-funding categories, only general government funding represents more than 1 percent of total R&D expenditures. According to our estimates, from 1961 to 2000 general government funding of business represented on average 34 percent of total R&D expenditures, while general government funding of nonprofit institutions represented 7 percent. The treatment of all crossfunding, except in the case of general government funding business, depends on whether the funding is a transfer or a contract. 32. See the discussion in appendix B comparing 1995–2000 to 1961–1966.
Table 8.5
Estimated components for relation of adjusted gross domestic product, adjusted gross national product, adjusted net national product, adjusted national income, and adjusted personal income (in billions of 1996 dollars)
Adjusted GDP
1961
1966
1973
1995
2000
2,446
3,254
4,169
7,705
9,475
2,462
3,274
4,205
7,725
9,469
16
26
39
96
125
1
2
5
11
14
5
7
11
26
32
2,239
2,993
3,805
6,675
8,060
2,447 2,463 1,846
3,228 3,248 2,525
4,145 4,182 3,315
7,678 7,698 6,620
9,597 9,591 8,367
() Plus: Income receipts from the rest of the world and less income payments to the rest of the world () Equals: Adjusted GNP (–) Less: Consumption of fixed capital Business Consumption of R&D capital Consumption of other fixed capital Capital consumption allowances for other fixed capital (–) Less: Capital consumption adjustment for other fixed capital Nonprofit Institutions Consumption of R&D capital Consumption of other fixed capital Capital consumption allowances for other fixed capital (–) Less: Capital consumption adjustment for other fixed capital Government General government Consumption of R&D capital Consumption of other fixed capital Government enterprises () Equals: Adjusted net national product Addenda Adjusted gross domestic incomea Adjusted gross national incomeb Adjusted net domestic product
Note: Bold italic titles show R&D components. a Adjusted gross domestic income deflated by the implicit price deflator for adjusted GDP. b Adjusted gross national income deflated by the implicit deflator for adjusted GNP.
returns to the performer are a net addition to GDP, for the reasons outlined previously. Table 8.5 is based on NIPA tables 1.9 and 1.10.33 Including R&D in the accounts requires estimates of “consumption of R&D capital” and its subcategories “capital consumption allowances” and “capital consumption adjustment” for all performing sectors when relevant. “Capital consumption allowances” refers to tax-return-based depreciation, and “consumption of fixed capital” refers to economic depreciation as estimated by the 33. Table 1.10 in NIPA, the 1996 dollar table, has significantly fewer entries than table 1.9.
Table 8.6
Estimated adjusted components for components of gross domestic product, by industry group (in billions of 1996 dollars)
Adjusted GDP Private industries Compensation of employees Indirect business tax and nontax liability Adjusted property-type income Returns to business capital Returns to R&D capital Net return Depreciation Returns to other capital Net return Depreciation Returns to nonprofit institutions capital Returns to R&D capital Net return Depreciation Returns to other capital Net return Depreciation Statistical discrepancy Government Compensation of employees Indirect business tax and nontax liability Adjusted property-type income Returns to general government capital Private returns to R&D capital Net return Depreciation Current measure GDP a
Returns to R&D capital Returns to business capital Returns to nonprofit institutions capital Returns to general government capital (–) Less: Returns to R&D capital included in current measure GDP All returns to business capital Spillover returns to nonprofits institutions capital Spillover returns to general government capital Net increase in R&D expenditures in GDP R&D fixed investment (Table 8.4) (–) Less: R&D expenditures in current measure GDP (Table 8.4) Adjusted GDP
1961
1966
1973
1995
2000
2,446
3,254
4,169
7,705
9,475
73 57 16
120 94 26
178 139 39
438 341 96
567 442 125
3 2 1
7 5 2
14 9 5
34 23 11
44 29 14
14 9 5
21 14 7
35 23 11
78 52 26
98 66 32
Addenda 2,432
3,228
4,123
7,544
9,224
91 73 3 14
148 120 7 21
226 178 14 35
549 438 34 78
709 567 44 98
82 73
134 120
202 178
493 438
638 567
2
4
7
17
22
7 14 45
10 22 67
17 30 73
39 106 187
49 182 285
30
45
42
81
103
2,446
3,254
4,169
7,705
9,475
Note: The value of some entries is affected by rounding. Italic numbers appear as listed in the addenda to the table. Bold italic titles show R&D components. a Returns to R&D capital listed here include all returns to R&D capital (e.g., both private and spillover returns).
BEA.34 Estimates are also needed for "subsidies less current surplus of government" and "current surplus of general government," as the surplus will change because of the R&D capital imputation.

Table 8.6 shows changes needed in the table "Components of GDP by Industry Group"35 to reflect returns to R&D capital for all performing sectors. As before, net return and depreciation are shown separately. For government property-type income, returns to R&D capital are estimated for general government. The addenda to table 8.6 present a comparison similar to that shown in the addenda to table 8.4. The total of all returns to R&D capital is shown in the first block of entries after the entry for current-measure GDP. The main body of the table separates these returns by sector—that is, private or government—and breaks them out into net return and depreciation. All (social) returns to business R&D and spillover returns to nonprofit institutions and general government are subtracted because they are included in current-measure GDP. As is required by a double-entry national accounts system such as the NIPA, GDP from the expenditure (product) side is exactly equal to GDP from the income side.36

8.3.3 Equations Comparing Current-Measure GDP with Adjusted GDP

Another way to look at the relationship between current-measure GDP and adjusted-measure GDP is through equations. The expenditure equation in nominal dollars is
(1)  GDP = P_C C + P_I I + P_{NE} NE + P_G G,

where P_C C is consumption, P_I I is investment, P_{NE} NE is net exports, and P_G G is government expenditures. Looking at the expenditure components of GDP to compare current measures to adjusted measures is useful:

(2)  Current-measure P_C C = P_{C,H} C_H + P_{C,N} C_N,

where P_{C,H} C_H is personal consumption expenditures (PCE) by households and P_{C,N} C_N is PCE by nonprofit institutions that serve persons.

(3)  Adjusted-measure P_C C = current-measure P_C C + P_{PR,N} PR_N - P_{C,R&D} R_{C,R&D},
where PPR,NPRN is private returns to R&D performed by nonprofit institutions and PC,R&DRC,R&D is R&D expenditures in current-measure GDP, 34. Some activities of nonprofit institutions are taxable; therefore, capital consumption adjustment for nonprofit institutions may be nonzero. 35. Table 6 is modeled after table 3 from Lum and Moyer (2001, p. 27). 36. The statistical discrepancy, which is the difference between GDP on the expenditure (product) side and GDI on the income side, is not shown, as it is unchanged and has no impact on the estimates or the analysis.
which are reclassified from consumption to investment. The R&D expenditures in consumption in current-measure GDP include nonprofit institutions and general government funding of R&D performed by others, and business funding of R&D performed by nonprofit institutions and general government.

(4)  Current-measure P_I I = P_{I,B} I_B + P_{I,N} I_N,

where P_{I,B} I_B is business fixed investment and P_{I,N} I_N is nonprofit institutions' fixed investment.

(5)  Adjusted-measure P_I I = current-measure P_I I + P_{R&D,B} R&D_B + P_{R&D,N} R&D_N,

where P_{R&D,B} R&D_B is expenditures for R&D performed by business and P_{R&D,N} R&D_N is expenditures for R&D performed by nonprofit institutions.

(6)  Current-measure P_{NE} NE = adjusted-measure P_{NE} NE,

as there is no change in the net exports measure as a result of the innovations.

(7)  Current-measure P_G G = P_{C,G} C_G + P_{I,G} I_G,

where P_{C,G} C_G is government consumption expenditures and P_{I,G} I_G is government fixed investment.

(8)
Adjusted-measure PGG current-measure PGG PPR,GPRG PR&D,G R&DG PI,R&DRI,R&D,
where PPR,GPRG is private returns to R&D performed by general government, PR&D,GR&DG is expenditures for R&D performed by general government, and PI,R&DRI,R&D is R&D expenditures for R&D funded by general government and performed by business contractors. The R&D funded by general government and performed by business that is included in the current measure of government consumption is reallocated to expenditures for R&D performed by business, PR&D,BR&DB, and included in adjusted measure PII. The magnitude of the additions to investment exceed the magnitude of the additions to consumption, whether including or excluding government consumption and investment in the totals. Capitalizing R&D decreases consumption, on net, as the increase in consumption from PPR,NPRN and PPR,GPRG, private returns to R&D performed by nonprofit institutions and general government, respectively, is less than the reduction in consumption from PC,R&DRC,R&D, expenditures in current-measure GDP, which are reclassified from consumption to investment. Investment unambiguously goes up as PR&D,BR&DB, PR&D,NR&DN, and PR&D,GR&DG, expenditures for
R&D performed by business, nonprofit institutions, and general government, respectively, are all added to GDP.37

The income equation in nominal dollars is

(9)  GDP = GDI + SD = wH + rS + IBT + SD,

where SD is the statistical discrepancy,38 wH is employee compensation, rS is property-type income, and IBT is indirect business tax and nontax liability; income-side current-measure GDP is equal to expenditure- (product-) side GDP as guaranteed by equation (9).39 Looking at the details, this time on the income side, and comparing current measures to adjusted measures is again useful:

(10)  Current-measure wH = adjusted-measure wH,

(11)  Current-measure IBT = adjusted-measure IBT,

(12)  Current-measure SD = adjusted-measure SD,

because the innovations have no effect on employee compensation, indirect business taxes, or the statistical discrepancy. Property-type income is the only component of GDI that changes with the innovation.

(13)  Current-measure rS = r_P S_P + r_G S_G = r_{P,B} S_{P,B} + r_{P,N} S_{P,N} + r_G S_G,

where r_P S_P is private property-type income, r_G S_G is government property-type income, r_{P,B} S_{P,B} is business private property income, and r_{P,N} S_{P,N} is nonprofit institutions' private property income. The adjusted measure of private property-type income increases by the private returns to R&D of private nonprofit institutions and of government.

(14)  Adjusted-measure rS = r_{O,B} S_{O,B} + r_{O,N} S_{O,N} + r_{O,G} S_{O,G} + r_{R&D,B} S_{R&D,B} + r_{R&D,N} S_{R&D,N} + r_{R&D,G} S_{R&D,G},

where property-type income is separated into a component for "other than R&D capital," indicated by the subscript O, and an R&D component, indicated by the subscript R&D. Equation (9) holds for adjusted measures as it did for current measures.
37. The difference between current-measure GDP and adjusted GDP is shown in 1996 dollars in the addenda to table 4. The two entries for R&D expenditures in current-measure GDP—PC,R&DRC,R&D , the portion already in consumption, and PI,R&DRI,R&D , the portion already in investment—are combined into one line. 38. Gross domestic product does not equal GDI because their components are estimated using largely independent and less-than-perfect source data; the difference between the two measures is the statistical discrepancy. See Parker and Seskin (1997, p. 19). 39. This paper will not discuss the estimation of property-type income, employee compensation, or the component parts, as this is not the primary purpose of this paper.
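A compact sketch of the expenditure-side bookkeeping in equations (3) and (5): consumption loses the R&D spending that is reclassified to investment and gains the private returns imputed to nonprofit R&D, while investment gains R&D performed by business and by nonprofit institutions. All dollar amounts below are hypothetical, and the function is only a schematic restatement of those identities, not the chapter's estimation procedure.

# Schematic restatement of equations (3) and (5): adjust PCE and private investment
# when R&D is capitalized. All inputs are hypothetical illustrative values.
def adjust_expenditure_side(pce, private_investment,
                            rd_in_pce,                  # R&D spending reclassified out of PCE
                            private_returns_nonprofit,  # PR_N imputed to nonprofit consumption
                            rd_performed_business,
                            rd_performed_nonprofit):
    adjusted_pce = pce - rd_in_pce + private_returns_nonprofit                        # eq. (3)
    adjusted_inv = private_investment + rd_performed_business + rd_performed_nonprofit  # eq. (5)
    return adjusted_pce, adjusted_inv

c_adj, i_adj = adjust_expenditure_side(
    pce=1000.0, private_investment=500.0,
    rd_in_pce=30.0, private_returns_nonprofit=20.0,
    rd_performed_business=60.0, rd_performed_nonprofit=6.0)
print(round(c_adj, 1), round(i_adj, 1))
# PCE falls on net because the imputed private returns are smaller than the
# reclassified R&D spending, while investment unambiguously rises, as the text notes.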
8.4 Effects of the Proposed Changes in Estimates

Capitalizing R&D increases the level of real and nominal GDP and affects the components of the accounts. It has a very small effect on the rate of growth of real GDP, but a significant effect on the composition of GDP and on our understanding of the sources of economic growth. Capitalizing R&D also raises investment and therefore savings and GDP. Specifically, over the 1961–2000 period, the following phenomena occurred:

• The rate of growth of real GDP is increased by less than 0.1 percentage point, and the nominal level of GDP is increased by 2 percentage points.
• The distribution of consumption and investment in the economy is changed, and the national savings rate is raised by 2 percentage points, from 19 to 21 percent.
• The analysis shows that R&D is a significant contributor to economic and productivity growth: in the base case scenario, the contribution of R&D investment accounts for 4 percent of overall GDP growth, and the contribution of returns to R&D capital accounts for 11 percent of GDP growth.
• R&D investment is on average 13 percent of current fixed investment; R&D fixed capital stock adds 6 percent to current fixed capital stock.
• The share of property-type income in GDI rises by 1 percentage point compared to current-measure estimates. Returns to R&D capital represent 20 percent of property-type income.
• Regardless of the alternative assumptions made about R&D service lives, depreciation, lag in benefits, or deflators, R&D is a significant contributor to economic growth, with the contribution of R&D investment ranging from 2 to 7 percent of GDP growth and the contribution of returns to R&D capital ranging from 4 to 15 percent of GDP growth.40
8.4.1 Adjusted NIPA Tables

In this section the estimates for selected years in tables 8.4, 8.5, and 8.6 are discussed. Table 8.4 shows adjusted GDP, relevant R&D estimates, and the impact of capitalizing R&D on major subaggregates of GDP. The biggest addition to GDP is R&D business fixed investment. Business R&D investment is 11 to 12 percent of gross private domestic investment (GPDI) except in 1973, when it is only 8 percent of GPDI. In 1973 government R&D investment

40. Rates of growth are computed throughout this paper from endpoint to endpoint. For example, the 1961–2000 rate of growth of adjusted GDP is calculated as ([1996 dollar-adjusted GDP in 2000 / 1996 dollar-adjusted GDP in 1961]^(1/(2000 – 1961)) – 1) × 100.
increased in relative terms and private R&D investment decreased. Government R&D investment is no greater than 25 percent of private R&D investment except in 1973, when it is 31 percent of private R&D investment. Private returns to R&D capital of nonprofit institutions and general government have a small impact on GDP. They amount to less than 1 percent of current-measure GDP. Depreciation, one component of private returns to R&D capital, is commonly larger than the net return, the other component.41 The net return is to the performer only (excludes spillovers), but depreciation is on the total R&D capital stock. The magnitudes for private R&D investment are larger than those for private returns to R&D performed by nonprofit institutions and general government. Table 8.5 shows consumption of R&D capital (depreciation) as well as a number of adjusted aggregates. The magnitudes of adjusted GDP, adjusted gross national product (GNP), adjusted GDI, and adjusted gross national income (GNI) are very similar. Adjusted net national product (NNP) and adjusted net domestic product (NDP) are smaller than the other aggregates, but similar in size to each other as both deduct consumption of fixed capital.42 As GNP is the market value of goods and services produced by labor and property supplied by U.S. residents, regardless of where they are located, adjusted GNP is equal to GDP plus net income from the rest of the world. As previously noted, the difference between GDP and GDI is the statistical discrepancy. Similarly, NNP is equal to NDP and GNI is equal to GDI, both plus net income from the rest of the world. Consumption of R&D capital is an increasing share of GDP for the years shown, doubling from slightly less than 1 percent to almost 2 percent between 1961 and 2000. Table 8.6 focuses on components of GDI. The rates of growth of the 1996 dollar estimates for returns to R&D capital are inputs to the contribution calculations, along with current dollar shares. The share of all returns to R&D in GDP doubled between 1961 and 2000, rising from less than 4 percent to over 7 percent. Returns to business R&D consistently represent the largest component of R&D, averaging close to 80 percent of total returns to R&D in all years shown. 8.4.2 Effects on GDP, GDI, and National Savings Capitalizing R&D affects both the product (GDP) and income (GDI) sides of the national accounts in a double-entry system. It affects estimates 41. The estimation of depreciation is discussed in appendices A and C. A geometric rate of depreciation is employed. 42. In table 8.5 the term “consumption of R&D capital” refers to economic depreciation. In NIPA table 1.9, “consumption of fixed capital” also refers to economic depreciation (ignoring the complication of adjustments for disasters), “capital consumption allowances” refers to tax-based depreciation, and the capital consumption adjustment is equal to the consumption of fixed capital less capital consumption allowances.
Table 8.7  National savings rate (%)

Period        Adjusted measure    Impact of capitalizing R&D    Current measure
1961–66            23.7                      2.4                     21.3
1967–73            21.9                      2.2                     19.7
1974–95            20.2                      2.0                     18.1
1996–2000          20.3                      2.1                     18.1
1961–2000          21.0                      2.1                     18.9

Note: Totals may be off by .1 because of rounding.
of savings, investment, capital stock, and the returns to R&D and propertytype income. Savings, Investment, and Wealth Capitalizing R&D has a significant effect on measures of savings, investment, and wealth. It raises the estimate of investment and, therefore, the estimate of national savings, as well as capital stock. R&D investment and R&D fixed capital stock, the latter of which is an important component of wealth, are large relative to current measures of investment and stock. Business performers account for more than two-thirds of R&D investment and capital stock. Notable period-by-period differences in the growth rate of R&D investment by performing sector may have had, and may continue to have, an effect on economic growth. Table 8.7 shows that capitalizing R&D raises the national savings rate around 2 percentage points. As defined in NIPA table 5.1, the national savings rate is equal to gross investment (the sum of gross private domestic, gross government, and net foreign investment) less the statistical discrepancy, divided by GNP. Although the 1961–2000 rates of growth for real R&D investment and fixed capital stock are similar across all performing categories, growth rates for investment show much more fluctuation by periods than those for fixed capital stocks (see tables 8.8 and 8.9). Other things held equal, stocks will change much more slowly than investment, as current stocks are large relative to investment. Accordingly, it is not surprising that there is less variation in the composition of R&D fixed capital stocks than for R&D investment. For the last three periods—1967–73, 1974–95, and 1996–2000— the average share of R&D fixed capital stocks for business, nonprofit institutions, and general government is almost constant.43 In all but the first period shown (1953–60), the sum of the shares for business and nonprofit 43. When growth rates are calculated, the periods are 1953–61, 1961–66, 1966–73, 1973– 95, and 1995–2000; when shares or contributions are calculated, the periods are 1953–60, 1961–66, 1967–73, 1974–95, and 1996–2000.
Table 8.8  Rates of growth of real R&D investment and current measure real gross domestic product (%)

              Real R&D investment
Period        Total    Business    Nonprofit institutions    General government    Current measure real GDP
1953–61        11         12                16                        6                       3
1961–66         8          7                14                       12                       6
1966–73         1          1                 1                        3                       4
1973–95         4          5                 4                        4                       3
1995–2000       9         10                 7                        6                       4
1961–2000       5          5                 5                        5                       3
1953–2000       6          6                 7                        5                       3
Table 8.9  Rates of growth of real R&D fixed capital stock and current measure real gross domestic product (%)

              Real R&D fixed capital stock
Period        Total    Business    Nonprofit institutions    General government    Current measure real GDP
1961–66        10         10                17                        9                       6
1966–73         6          5                 8                        7                       4
1973–95         4          4                 4                        4                       3
1995–2000       6          6                 5                        5                       4
1961–2000       5          5                 7                        5                       3
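The growth rates reported in tables 8.8 and 8.9 are endpoint-to-endpoint compound rates of the kind described in note 40. A minimal Python sketch of that calculation follows, applied here to the 1961 and 2000 adjusted-GDP levels shown in table 8.4; the function itself is a generic restatement of the formula, not code from the chapter.

# Endpoint-to-endpoint compound growth rate, as described in note 40.
def endpoint_growth_rate(first_value, last_value, first_year, last_year):
    """Average annual percentage rate of growth between two endpoint observations."""
    years = last_year - first_year
    return ((last_value / first_value) ** (1.0 / years) - 1.0) * 100.0

# 1996 dollar-adjusted GDP in 1961 and 2000, from table 8.4 (billions of dollars).
print(round(endpoint_growth_rate(2446.0, 9475.0, 1961, 2000), 1))   # roughly 3.5 percent per year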
institutions is approximately 80 percent in all periods.44 Only the 1953–60 period shows a significant difference from the typical pattern. The share of the total for business is close to or above 70 percent for both the R&D investment share and the R&D fixed capital stocks share. Returns to R&D and Property-Type Income In the previous section, the focus was on an asset flow (investment and savings) and balance sheet (stock and wealth) account. This section describes the effect of capitalizing R&D on the income side of the accounts. Gross domestic income rises by the same amount as the increase in GDP, as capitalizing R&D increases property-type income.45 The returns to 44. The first year of R&D expenditures data from the NSF R&D database is 1953; see NSF (2001a) or the NSF website at http:www.nsf.gov/sbe/srs/indus/start.htm. 45. Property-type income is defined as the sum of corporate profits, proprietors’ income, net interest, capital consumption allowances, inventory valuation adjustments, rental income of persons, business transfer payments, and surplus of government enterprises, less subsidies.
Table 8.10  Share of property-type income in gross domestic income (GDI) and share of returns to R&D in property-type income (%)

Columns: (1) share of property-type income in adjusted GDI; (2) difference, adjusted and current measure GDI; (3) share of property-type income in current measure GDI; (4) share of returns to R&D in adjusted property-type income.

Period         (1)    (2)    (3)    (4)
1961–66         36      1     35     16
1967–73         34      1     32     21
1974–95         35      1     34     20
1996–2000       37      1     36     19
1961–2000       35      1     34     20

Notes: Totals may be off by 1 because of rounding. Shares are average current dollar shares.
The returns to R&D capital can be separated out from other types of capital, and its share of property-type income can be identified.46 The share of property-type income in GDI increases on average by 1 percentage point (see table 8.10) due to capitalizing R&D. The share of returns to R&D in adjusted property-type income is significant, averaging 20 percent. Except for 1961–66, when the share of property-type income in GDI is relatively high, there is little variation in the share of R&D returns in property-type income.

8.4.3 Effects on Variables Used in Growth Analysis: Property-Type Income and Gross Returns to Capital

The growth-accounting model provides the basis for estimating the returns to R&D capital and the contribution of R&D to economic growth on the income (supply) side.47 By typically excluding R&D capital, past analyses of sources of economic growth have attributed property-type income to fixed assets other than R&D capital. Accordingly, the rate of return to fixed assets and the contribution of those assets to GDP growth on the income (supply) side have been overstated. Distinguishing the return to R&D capital from returns to other types of capital provides a means of determining its size relative to other types of traditionally measured returns to capital and, therefore, R&D's relative contribution to economic growth. The basic growth accounting model equation is

46. See the "Returns to R&D Capital" section of appendix C for a discussion of how net returns to R&D are estimated.
47. Two types of contributions are estimated in this paper, one for the product (demand) side and the other from the income (supply) side. The contribution from the product (demand) side looks at the contribution of R&D investment to GDP growth. The contribution from the income (supply) side looks at the contribution of returns to R&D to GDP growth.
(15)    ROG of Q = α_K × (ROG of K) + α_L × (ROG of L) + ρ,

where

    α_K = nominal dollar property-type income share = rS/p_Q Q,
    α_L = nominal dollar labor income share = wH/p_Q Q,

ROG is the abbreviation for rate of growth, Q is real output, K is real capital input, L is real labor input, p_Q Q is nominal dollar output, and ρ is the rate of productivity change. Equation (15) is revised to include R&D capital as follows:

(16)    ROG of Q = α_R&D × (ROG of K_R&D) + α_O × (ROG of K_O) + α_L × (ROG of L) + ρ,

where the subscript R&D refers to R&D capital and O refers to all other capital. The contribution of R&D to economic growth on the income (supply) side is equal to α_R&D × ROG of K_R&D / ROG of Q; the contribution of other assets is α_O × ROG of K_O / ROG of Q. Equation (16) of this paper shows the revisions needed in the basic growth accounting model to allow for incorporation of R&D capital.

Gross return to capital is defined as property-type income divided by fixed capital stock. Distinguishing R&D fixed capital stock and property-type income from fixed capital stock, other than R&D, and the related property-type income allows for the estimation of gross rates of return for R&D capital, as distinct from all other capital. Property-type income is the same as what would be used in the construction of the alphas, the income shares, in equations (15) and (16). In the current NIPAs, rates of return on capital tend to be overstated because R&D stock is not included in the capital stock denominator, yet most of the return to R&D capital is included in the property-type income numerator. The returns to R&D additions to GDP amount to 1 percent of current-measure GDP, yet R&D fixed capital stock averages 6 percent of current-measure fixed capital stock (tables 8.11 and 8.12). However, the effect on the gross return to total fixed capital stock is small, as the changes are a relatively small percent of the current measure totals.48

8.4.4 Sources of Growth Analysis: Contributions of R&D to Growth

Contributions of R&D to growth can be estimated on the product (demand) side and on the income (supply) side. The contribution of R&D investment to growth in adjusted GDP is the product-side number, and the contribution of return on R&D capital to adjusted GDP growth is the income-side number.49

48. If inventories and land (including subsoil minerals) were included in the estimate of capital stock, both the current and the adjusted measures of the gross rate of return would be lower.
49. Annual approximate contributions are calculated in this paper as a weighted growth rate, where the weights are the average share in the preceding period and the current period. For example, the contribution of R&D investment to growth in adjusted GDP is calculated as .5 × (nominal dollar R&D investment_{t–1}/nominal dollar-adjusted GDP_{t–1} + nominal dollar R&D investment_t/nominal dollar-adjusted GDP_t) × ([real R&D investment_t/real R&D investment_{t–1}] – 1) × 100. An average of the annual contributions is then calculated and reported in table 8.13.
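To make the mechanics of equation (16) concrete, the short sketch below decomposes output growth into share-weighted input growth plus a productivity residual and reports the income-side contribution of R&D capital as a share of GDP growth. All figures are hypothetical placeholders chosen only to be of roughly the magnitude suggested by tables 8.8–8.10; they are not estimates from this paper.

```python
# A minimal sketch of the income-side accounting in equations (15)-(16).
# Shares and growth rates are hypothetical placeholders, not estimates from the paper.
alpha_rd, alpha_other, alpha_labor = 0.07, 0.28, 0.65   # nominal income shares, sum to 1
g_k_rd, g_k_other, g_labor = 0.05, 0.04, 0.015          # rates of growth of real inputs
g_q = 0.035                                             # rate of growth of real output

# Equation (16): output growth = share-weighted input growth + productivity change.
productivity_change = g_q - (alpha_rd * g_k_rd + alpha_other * g_k_other + alpha_labor * g_labor)

# Income-side contribution of R&D capital, expressed as a share of GDP growth.
contribution_rd = alpha_rd * g_k_rd / g_q
print(f"R&D contribution: {100 * contribution_rd:.0f}% of GDP growth")   # 10% with these inputs
print(f"implied productivity change: {productivity_change:.4f}")
```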
Table 8.11    Net additions to gross domestic product and R&D totals (as a percent of current dollar, current measure GDP)

                            Net additions to GDP                                R&D totals
              ------------------------------------------------------    ---------------------------
Period        Total    R&D funded and         Private returns to        R&D            Returns
                       performed by           NP&GG from R&D            investment     to R&D
                       business               performed by NP&GG
1961–66         1            1                        1                     3              6
1967–73         2            1                        1                     3              7
1974–95         2            1                        1                     2              7
1996–2000       2            2                        1                     3              7
1961–2000       2            1                        1                     3              7
Notes: Totals may be off by 1 because of rounding. NP&GG is an abbreviation for nonprofit institutions and general government.

Table 8.12    R&D investment and wealth share of existing measures (%)

              Share R&D fixed investment is of    Share R&D fixed capital stock is of
Period        current measure investment          current measure fixed capital stock
1961–66                    14                                    5
1967–73                    13                                    6
1974–95                    13                                    6
1996–2000                  13                                    6
1961–2000                  13                                    6
Note: Shares are average current dollar shares.
National account estimates using the product-side approach are reported in NIPA tables S.2 and 8.2. Income-side estimates follow the sources of economic growth approach discussed earlier.

The contributions are presented in two formats (see table 8.13). The first corresponds to the presentation in the NIPA tables, where the contributions sum to the rate of growth of GDP. The second takes these same contribution estimates and presents them as a percentage of the rate of growth of GDP, where the sum of all contributions so calculated is 100 percent. The significant contribution of R&D to economic growth should be recognized, whether viewed from the product side or the income side.
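The two presentation formats in table 8.13 differ only by a rescaling. A small sketch, using the weighted-growth-rate contribution defined in note 49 with hypothetical series (not BEA data), shows how a contribution that "sums to the GDP growth rate" is converted into a percentage of that growth rate.

```python
# Sketch of the two presentation formats of table 8.13, using the note 49 formula.
# All series below are hypothetical placeholders, not BEA data.
nominal_rd = [50.0, 56.0]          # nominal R&D investment, years t-1 and t
nominal_gdp_adj = [4000.0, 4280.0] # nominal adjusted GDP
real_rd = [60.0, 65.0]             # real (1996 dollar) R&D investment
real_gdp_adj = [4100.0, 4260.0]    # real adjusted GDP

weight = 0.5 * (nominal_rd[0] / nominal_gdp_adj[0] + nominal_rd[1] / nominal_gdp_adj[1])
contribution_points = weight * (real_rd[1] / real_rd[0] - 1) * 100   # "summing to GDP growth rate"
gdp_growth = (real_gdp_adj[1] / real_gdp_adj[0] - 1) * 100
contribution_share = 100 * contribution_points / gdp_growth          # "as a percentage of GDP growth rate"

print(round(contribution_points, 2), round(contribution_share, 1))
```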
Table 8.13    Contribution of R&D investment and return to R&D capital to growth in GDP (%)

                        R&D investment                           Return on R&D capital
              -------------------------------------     -------------------------------------
              Summing to GDP    As a percentage of      Summing to GDP    As a percentage of
Period        growth rate       GDP growth rate         growth rate       GDP growth rate
1961–66           .22                  4                    .62                  11
1967–73           .03                  1                    .43                  12
1974–95           .11                  4                    .29                  10
1996–2000         .22                  5                    .37                   9
1961–2000         .12                  4                    .38                  11
In the base case scenario, for 1961–2000, the contribution of R&D investment to growth in adjusted GDP averages 4 percent, and the contribution of return on R&D capital to growth in adjusted GDP averages 11 percent (table 8.13).50 In the alternative scenarios presented and discussed in appendix A, for 1961–2000, the contribution of R&D investment to growth in adjusted GDP averages from a low of 2 percent to a high of 7 percent (table 8A.1), and the contribution of return on R&D capital to growth in adjusted GDP averages from a low of 4 percent to a high of 15 percent (table 8A.6). The period-to-period fluctuation in the contribution of R&D investment reflects mainly the variation in the growth rate of R&D investment (see table 8.8), rather than variation in the rate of growth of adjusted GDP. The lesser period-to-period fluctuation in the contribution of returns to R&D capital reflects the smaller variation in the rate of growth of R&D fixed capital stock (compare estimates in table 8.8 to those in table 8.9).

8.5 Conclusions and Future Research

Construction of the partial R&DSA within a NIPA framework allows for the estimation of the impact of R&D on GDP and other macroeconomic aggregates as well as the estimation of the contribution of R&D to economic growth using a sources-of-economic-growth approach. The sources of economic growth approach is based on growth-accounting models, which have been used to analyze the relationship between output and inputs in production and to determine the contribution of inputs, including R&D.51 They are part of a rich tradition of examining the sources of economic growth, including productivity growth, as epitomized by the work of Edward F. Denison, John W. Kendrick, Dale W. Jorgenson and his

50. Griliches (1973, p. 78) estimates the product-side contribution of R&D to GDP growth to be .34 percent as of 1966, probably considerably less. Our estimate of this contribution is .22 for the 1961–66 period (see table 8.13).
51. See op cit., Solow (1957), and OECD (2001, annex 3).
coauthors, and others such as Stephen D. Oliner and Daniel E. Sichel.52 R&D expenditures have been listed as a possible cause of productivity growth in the attempts to identify the factors behind the so-called Solow residual.53 Substantial additional work is needed to determine the effect of R&D on GDP. Estimates provided in this paper depend on assumptions made about the rates of return, depreciation rates, service lives, deflators, gestation, and application lags. The reasonableness of these assumptions needs to be assessed. Each of these factors may have varied over time as the composition of R&D expenditures by performers has changed and the nature of technical change itself has changed. Also, further work is needed to determine whether the pattern of returns to R&D, both private and spillover, has varied over time or has remained constant. The pattern of returns may vary over the lifetime of a specific asset, may certainly vary from one investment to another, and may vary over time, for different vintages of R&D investment. Without a means of gauging these kinds of changes, assessing the effect of R&D on GDP is difficult. In addition, rates of return that may be appropriate for private R&D may not be appropriate for government R&D. Despite these remaining questions, this exploratory paper is a first step in improving our understanding of the contribution of R&D to growth. It shows how a national income accounting methodology can be used to examine the role of R&D and how capitalization of R&D expenditures might affect GDP. When the System of National Accounts (1993) was revised, one of the last decisions made was not to capitalize R&D expenditures. This decision is being revisited by a number of national income accountants and is an area in which the BEA could again demonstrate that it is a world leader in statistical innovations. One only needs to look as far as the adoption of a quality adjusted computer price index and chain indexes and, most recently, the capitalization of software to understand the important role that the BEA has played.
52. See Denison (1985), Kendrick (1973), Jorgenson, Gollop, and Fraumeni (1987), Jorgenson and Stiroh (2000), Oliner and Sichel (2000), and Jorgenson (2001).
53. See Denison (1979, pp. 122–27) and Kendrick and Grossman (1980, pp. 10, 16–18, and chap. 6, pp. 100–111).

Appendix A
Alternative Scenarios

Since much is unknown about R&D, such as the appropriate deflators, depreciation rates, lengths of gestation and application lags, and spillover
gross rates of return, a number of alternative scenarios are analyzed using different assumptions about the deflators, depreciation rates, lag structure, and the spillover gross rates of return. The estimates under these alternative scenarios are compared to the results in the main part of the paper to gauge the significance of the contribution of R&D. These comparisons highlight the preliminary nature of the estimates, given the uncertainty about many aspects of R&D.

In the tables and discussion that follow, two alternative deflators are employed. The deflator used in the main body of the paper is the private fixed nonresidential investment chain-type price index from NIPA table 7.6. The alternative deflators are the overall R&D deflator used in the previous BEA R&D study (Carson, Grimm, and Moylan 1994) extended beyond 1992 with a GDP deflator and the information processing equipment and software chain-type price index from NIPA table 7.6.54

Two alternative depreciation rates are employed for business R&D. The geometric depreciation rate used in the main body of the paper for all R&D from the 1994 BEA study is 11 percent. The alternative geometric depreciation rates for business R&D are 20 percent in all years and a rate that increases gradually from 10 percent in 1961 to 20 percent in 2000.55 The latter rates take into account the rise in R&D in information technology (IT) relative to other industries56 and the increased pace of technological change in information technologies that has reduced the life of R&D capital, especially semiconductor technology. Semiconductor technology is an important component of IT and many other products, and has experienced increasingly rapid rates of obsolescence, as reflected in the steeply falling prices of semiconductor devices.

The declining balance rate that is assumed determines the service life of R&D capital. In the 1994 BEA study, the depreciation rate was picked as opposed to a service life. With a double declining balance rate, an 11 percent rate corresponds to an eighteen-year service life; with a 1.65 declining balance rate (the current BEA default for equipment), the service life is fifteen years; and with a .91 declining balance rate (the current BEA default for structures), the service life is eight years.57 For a 20 percent depreciation rate, similar declining balance rates have corresponding service lives of 10 years, 8 years, and 4.5 years.

54. The BEA 1994 study R&D deflator is very similar to the GDP deflator, particularly in the later eighties and nineties.
55. The alternative R&D depreciation rate assumptions in BLS Bulletin 2331 (BLS 1989) are 10 percent and 20 percent.
56. See Hall, Jaffe, and Trajtenberg (2001, p. 13 and fig. 5).
57. The default rates are given in Fraumeni (1997). The formula for a geometric rate of depreciation is the declining balance rate divided by the service life. Accordingly, the service life can be derived as the declining balance rate divided by the geometric rate of depreciation (Fraumeni 1997, p. 11).
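The service-life arithmetic in note 57 can be checked directly: the geometric depreciation rate equals the declining-balance rate divided by the service life, so the implied service life is the declining-balance rate divided by the depreciation rate. The short sketch below simply evaluates that relationship for the rates quoted in the text.

```python
# Checking the note 57 relationship: service life = declining-balance rate / depreciation rate.
def implied_service_life(declining_balance_rate, depreciation_rate):
    return declining_balance_rate / depreciation_rate

for dep in (0.11, 0.20):
    for db in (2.0, 1.65, 0.91):   # double-declining, BEA equipment, and BEA structures defaults
        print(f"depreciation {dep:.0%}, declining balance {db}: "
              f"{implied_service_life(db, dep):.1f}-year service life")
```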
Only the business R&D depreciation rate is varied, for the following reasons:

• First, the gross rate of return to nonprofit institutions and general government performers is assumed to be two-thirds of the business rate, or 16.7 percent. A depreciation rate of 20 percent implies a long-term negative 3.3 percent net return to R&D capital. This is not a reasonable assumption unless nonprofit institutions and general government are undertaking R&D because the social rate of return is positive, even though the net return to themselves is negative.
• Second, business R&D is heavily concentrated in development; the same is not true for nonprofit institutions or general government. Because development investment is generally believed to have a shorter service life than either basic or applied research, there is a rationale for lowering the average service life for all business R&D without doing the same for nonprofit institutions and general government.

For both types of contributions—the contribution of R&D investment and the contribution of returns to R&D capital—the private fixed nonresidential investment deflator used in this study produces contribution estimates that lie between those of the alternative deflators. The BEA 1994 study GDP deflator produces the lowest estimates, and the information-processing equipment and software deflator produces the highest estimates in all time periods. For 1961–2000, the three alternative deflators produce estimates of the contribution of R&D investment to growth in adjusted GDP of 2 percent, 4 percent, and 7 percent (see table 8A.1) and estimates of the contribution of returns to R&D capital of 9 percent, 11 percent, and 15 percent (see table 8A.2).

Because changing the R&D depreciation rate has no effect on investment, only estimates of the contribution of return on R&D capital to adjusted GDP growth are presented for the alternative depreciation rate assumptions (table 8A.3). The depreciation rate assumption of 11 percent used in the main body of the paper consistently produces contribution estimates that are higher than either of the two alternatives, with the varying depreciation rate being the middle estimate and the 20 percent depreciation rate alternative being the lowest estimate.
Table 8A.1    Contribution of R&D investment to growth in adjusted GDP, alternative deflators (as a percent of GDP growth rate)

              1994 BEA GDP    Gross private fixed nonresidential    Information processing equipment
Period        deflator        investment deflator                   and software deflator
1961–66            3                       4                                       5
1967–73            0                       1                                       4
1974–95            3                       4                                       8
1996–2000          3                       5                                       9
1961–2000          2                       4                                       7
Table 8A.2    Contribution of return to R&D capital to growth in adjusted GDP, alternative deflators (as a percent of GDP growth rate)

              1994 BEA GDP    Gross private fixed nonresidential    Information processing equipment
Period        deflator        investment deflator                   and software deflator
1961–66           10                      11                                      12
1967–73           10                      12                                      14
1974–95            8                      10                                      17
1996–2000          6                       9                                      12
1961–2000          9                      11                                      15
Table 8A.3    Contribution of return to R&D capital to growth in adjusted GDP, alternative rates of depreciation (as a percent of GDP growth rate)

              11% depreciation    Varying depreciation rate    Constant 20%
Period        rate                from 10% to 20%              depreciation rate
1961–66            11                       10                        7
1967–73            12                       10                        7
1974–95            10                        7                        7
1996–2000           9                        5                        6
1961–2000          11                        8                        7
These results are not surprising, because raising the depreciation rate lowers the 1996 dollar value of R&D fixed capital stock. The total return on those stocks falls by the same percentage as the decrease in the stocks.

Three alternative lag structures are tested. The lag used in the main body of the paper is a one-year lag that reflects an average lag applied to all categories of R&D expenditures and follows the assumption used in the earlier BEA estimates of R&D capital. Past studies have identified two types of lags: gestation lags and application lags. Gestation lags refer to the time needed to complete an R&D project, and application lags, to the time between completion of the R&D and its initial commercialization. Past research has found that the gestation lags range between one and two years and that application lags range from less than one year to two years (Carson, Grimm, and Moylan 1994, p. 44).58 A one-year lag assumption takes into account only the gestation period. To take into account gestation and application lags, alternative assumptions are a three-year and five-year lag, and a seven-year lag that takes into account lagged impacts on profits as well.

58. See also BLS (1989, pp. 6–7, 19–21) for a discussion of studies that look at the lag between research and profits and productivity growth.
The 1961–2000 average contribution of return to R&D capital to growth in adjusted GDP for the alternative lags is at most 1 to 2 percentage points lower than in the one-year lag scenario (table 8A.4). The contribution estimates vary by period, reflecting the impact of a longer lag. Not surprisingly, the 1967–73 contributions in the three- and five-year lag scenarios are higher than in the one-year lag scenario, reflecting the delayed impact of the high rates of growth of R&D expenditures in 1953–1961 and 1961– 1966 (see table 8.8). The spillover gross rate of return may be lower than the 25 percent for business and 16.7 percent for nonprofit institutions and general government assumed in the main body of the paper. Two alternative scenarios are examined (see table 8A.5) and compared to those presented in the main body of the paper. The first assumes a spillover rate of return of 12.5 percent for R&D performed by business and 8.3 percent for R&D performed by nonprofit institutions and general government, and the second, a spillover gross rate of return that varies from 25 percent to 12.5 percent for R&D performed by business and from 16.7 percent to 8.3 percent for R&D performed by nonprofit institutions and general government. The contributions of return to R&D capital to growth in adjusted GDP are highest for the assumptions used in the main body of the paper, ranging from 9
Table 8A.4    Contribution of return to R&D capital to growth in adjusted GDP, alternative lags (as a percent of GDP growth rate)

Period        One-year lag    Three-year lag    Five-year lag    Seven-year lag
1961–66            11               9                 8                6
1967–73            12              13                13               12
1974–95            10               9                 9                9
1996–2000           9               7                 7                7
1961–2000          11              10                 9                9
Table 8A.5    Contribution of return to R&D capital to growth in adjusted GDP, alternative spillover gross rates of return (as a percent of GDP growth rate)

              Constant at 25% for business,    Varying from 25% to 12.5% for business,    Constant at 12.5% for business,
Period        16.7% for NP&GG                  16.7% to 8.3% for NP&GG                    8.3% for NP&GG
1961–66                  11                                    10                                       8
1967–73                  12                                    10                                       9
1974–95                  10                                     7                                       8
1996–2000                 9                                     6                                       7
1961–2000                11                                     8                                       8

Note: NP&GG is an abbreviation for nonprofit institutions and general government.
percent to 12 percent for the periods. The alternative scenario contributions are at most 3 percentage points lower. These results initially seem surprising—the contribution for the lowest assumed spillover gross rate of return is sometimes higher than that for the scenario in which the spillover gross rate of return varies. However, even though in nominal dollars the spillover return is higher in the “varying” scenario, as the spillover gross rate of return falls, the rate of growth of constant dollar return to R&D capital is lower in the scenario with varying returns than in the scenario with constant 12.5 percent, 8.3 percent spillover gross rates of return. Whether the contribution in the varying scenario is lower or higher than the constant scenario depends upon the relative effect of the higher current dollar weight in the varying scenario versus the lower rate of growth of 1996 dollar return to R&D capital in the varying scenario. Table 8A.6 presents the full range of the possible magnitudes of the contribution of return to capital to growth in adjusted GDP looking at all possible permutations of assumptions. It shows that, regardless of the assumptions used, the contribution of R&D to economic growth is very significant. The lowest contribution for 1961–2000 is 4 percent, and the highest is 15 percent. For the base case, with the assumptions used in the main body of the paper, the contribution is in the middle range at 11 percent. Using the information-processing equipment and software deflator alone accounts for the 4 percentage point increase of the contribution from the base case to the highest contribution case. A higher depreciation rate is the main factor that lowers the contribution from the base case; however,
Table 8A.6    Contribution of return to R&D capital to growth in adjusted GDP, alternative scenarios: Lowest contribution, base case, and highest contribution (as a percent of GDP growth rate)

Scenario                          Lowest                             Base case                              Highest

Deflator                          1994 BEA GDP deflator              Gross private fixed nonresidential     Information processing equipment
                                                                     investment deflator                    and software deflator
Depreciation                      Constant 20% depreciation rate     11% depreciation rate                  11% depreciation rate
Lag                               Seven year                         One year                               One year
Spillover gross rate of return    Constant at 12.5% for business,    Constant at 25% for business,          Constant at 25% for business,
                                  8.3% for NP&GG                     16.7% for NP&GG                        16.7% for NP&GG

Period
1961–66                                4                                   11                                    12
1967–73                                6                                   12                                    14
1974–95                                4                                   10                                    17
1996–2000                              3                                    9                                    12
1961–2000                              4                                   11                                    15

Note: NP&GG is an abbreviation for nonprofit institutions and general government.
lowering the spillover gross rate of return is also a factor. Other variations in assumptions are less important.
Appendix B
1995–2000 Compared with 1961–66

Although the 1961–66 and 1995–2000 periods appear to be similar, there are some notable differences. The 1961–66 period, often viewed as a heyday of U.S. economic and productivity growth, and the 1995–2000 period are similar in terms of the annual rate of growth of R&D, 8 percent and 9 percent, respectively, and an above-average annual rate of growth of GDP, 4 percent and 6 percent, respectively. The differences relate to who funded and performed R&D, the type and composition of R&D being performed, and growth in R&D expenditures leading up to the period.

In 1995–2000, acceleration in business R&D is the major catalyst for the high overall rate of growth, whereas in 1961–66, it is nonprofit institutions and general government (table 8.8).59 Business performed the bulk of R&D in both periods, but the 2 percentage point increase in the share of R&D performed by business in 1996–2000 compared to 1961–66 is matched by an equivalent decrease in the share of R&D performed by general government. In the 1960s and early 1970s, general government on net funded the bulk of total R&D, especially for defense and the space race.60 Its share has declined steadily, from 66 percent of the total in 1961–66 to 32 percent in 1996–2000. Moreover, while general government funding of nonprofit institutions' R&D has been relatively stable at about 7 percent of total R&D, government's support of R&D performed by business has fallen from over 40 percent of total R&D in the 1961–66 period to 10 percent in 1996–2000. This decline in general government financing of R&D performed by business reflects, in part, the de-emphasis in government programs on defense R&D.61 Another difference between the two periods can be attributed to the rise in the importance of information technology (IT) R&D.62

59. However, note that there is a significant shift between 1953–61 and 1961–66. In the earlier period, the rates of growth for R&D performed by business and nonprofit institutions are significantly higher than the rate of growth for R&D performed by general government. In addition, the drop between 1953–60 and 1961–66 in the share of R&D performed by business is almost equal to the increase in the share of R&D performed by nonprofit institutions.
60. Funding of general government R&D by others is subtracted from general government R&D expenditures, including funding of others, to arrive at net funding by general government.
61. Table 3.11 in NIPA shows little growth (1 percent between 1982 and 2000) in government expenditures for defense R&D. Defense R&D peaked in 1987 (National Science Board 1998).
62. The increase in the importance of IT R&D relative to R&D in industries other than IT is reflected in patent data. These data indicate that the shares of total patent applications for
The nature of R&D performed by businesses, nonprofit institutions, and general government has also changed over the two periods. In the past, according to NSF data, nonprofit institutions and the federal government performed the bulk of basic research (that is, work undertaken to acquire new knowledge without any particular application in mind), accounting for about 75 percent of basic research in 1961–66.63 The share of total basic R&D performed by the federal government has declined from 15 percent in 1961 to 7 percent by 2000, while that of business rose from 26 percent in 1961 to 34 percent of total basic R&D in 2000, and that of nonprofit institutions has stayed relatively constant (around 60 percent).64 The runup in R&D investment in 1953–60 (see table 8.8) and the slow rate of growth in R&D investment in the years just preceding 1995 are reflected in the growth of real R&D stock; real R&D fixed capital stock grew at a record rate in 1961–66, but not in 1995–2000 (see table 8.9). As the benefits from R&D investment occur over a number of years, it is highly likely that we will be enjoying the fruits of the R&D mainly undertaken by business in the second half of the nineties through the first decade of the new millennium. An important consideration in this story is whether the service life of R&D has shortened since the sixties or, equivalently, whether the obsolescence rate of R&D has increased. The sensitivity of our results to our service life assumptions is discussed in appendix A. R&D investment represents 13 percent of current fixed investment; R&D fixed capital stock is less than half that in percentage terms at 6 percent of current fixed capital stock, reflecting the shorter service life of R&D compared to the average service life of all fixed assets currently included in the national accounts (see table 8.12).
computers and communications have risen steeply, from 5 percent in the 1960s to 20 percent in the late 1990s, and applications for electrical items and electronics had a steady share at 16– 18 percent. Shares of the three traditional fields (chemical, mechanical, and others) declined from 76 percent in 1965 to 54 percent in 1997. See Hall, Jaffe, and Trajtenberg (2001, p. 13 and fig. 5). 63. The other two categories of R&D are “applied research,” aimed at gaining the knowledge to meet a specific recognized need, and “development,” which is the systematic use of the knowledge gained from research directed toward the production of useful materials, devices, systems, or methods. See National Science Board (1998, pp. 4–9). 64. See National Science Board (1998, chap. 4) and National Science Foundation (2001a). The government and nonprofit institutions’ sectoring used by the NSF differ from those in the rest of this paper. The relevant NSF categories are federal government, universities and colleges, and other nonprofit institutions. In this paper, following Carson, Grimm, and Moylan (1994), R&D expenditures by public universities and colleges are allocated to general government. In this paragraph when the term “federal government” is used, 1994 BEA definitions are being employed, including those for nonprofit institutions.
Appendix C
Technical Appendix

This technical appendix provides additional information about the construction of the estimates of R&D investment, R&D capital stock, and returns to R&D capital, as well as the deflation and aggregation methodologies employed. Table 8.2 lists the base case scenario assumptions. The alternative scenarios are described in appendix A. This technical appendix provides additional details of the base case scenario and describes how alternative scenarios are constructed.

R&D Investment and Capital Stock

The original BEA investment and capital stock estimates were updated through 2000 for this paper based on NSF expenditure data.65 The NSF nominal dollar expenditure data from 1992 are adjusted for differences in the levels and composition of BEA and NSF R&D expenditures using a regression approach. For the three performer categories, simple linear regressions of the BEA categories against a constant, a time trend, and the nearest comparable NSF category are fitted for 1953–1992. A time trend times NSF data interaction term is included in the general government performer equation, as the associated coefficient is significant. In all cases, the adjusted R-squared is above .9, and all coefficients are highly significant. The results from the fitted equation are used to forecast what BEA values would have been for 1993–2000. This is a simplified approach; in the earlier project a number of specific adjustments were made to the NSF data.66

One basic difference between the BEA and NSF R&D data is the allocation of R&D expenditures by public colleges and universities. The BEA allocates these expenditures to government, while the NSF allocates these expenditures to nonprofit institutions. The NSF data only identify R&D expenditures by the federal government, not by state and local governments.

The NSF's R&D expenditures prior to 1992 are not directly comparable to those from 1992–2000 because of a change in sample design and survey methodology.67

65. See National Science Foundation (2001a).
66. In the earlier project, statistical adjustments made included those for timing and geographic coverage and to fill in missing data for some industries for some years (see Carson, Grimm, and Moylan 1994, p. 42). These adjustments raised the nominal dollar level of R&D expenditures in most years above those reported by the National Science Foundation (NSF), by at most 3 percent. In a few years, 1961–64, the nominal dollar expenditures are very slightly lower than those reported by NSF, at most by .4 percent. The level of the estimated 1993–2000 R&D data is always above the level of the NSF data, again by at most 3 percent.
67. See NSF (2001b, p. 10).
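A rough sketch of the extension procedure just described: ordinary least squares of a BEA series on a constant, a time trend, and the nearest comparable NSF series over 1953–92, with the fitted equation then used to project 1993–2000. The series below are synthetic placeholders rather than the actual BEA or NSF data, and the time-trend × NSF interaction used for the general government equation is omitted for brevity.

```python
# Sketch of the regression-based extension of the BEA R&D series (synthetic data).
import numpy as np

years = np.arange(1953, 1993)
nsf = 10.0 * 1.07 ** (years - 1953)            # hypothetical NSF expenditure series
bea = 1.02 * nsf + 0.1 * (years - 1953) + 2.0  # hypothetical BEA series to be mimicked

X = np.column_stack([np.ones_like(years, dtype=float), years - 1953, nsf])
coef, *_ = np.linalg.lstsq(X, bea, rcond=None)  # OLS fit for 1953-92

years_out = np.arange(1993, 2001)
nsf_out = 10.0 * 1.07 ** (years_out - 1953)     # NSF data observed for 1993-2000
X_out = np.column_stack([np.ones_like(years_out, dtype=float), years_out - 1953, nsf_out])
bea_forecast = X_out @ coef                     # fitted BEA values for 1993-2000
print(bea_forecast.round(1))
```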
Surveys after 1992 provide more accurate and better-quality data because they reflect the current year distribution of companies by size and industry, changes in industry classification systems, and changes in the way industry classifications are assigned. There is no way of knowing how large the differences might be and no expectation that this can be accurately determined, judging from the attempt by, but failure of, NSF staff to create a consistent time series. However, the potential problem is reduced by using aggregate data instead of industry data or data separated out by type of R&D: basic, applied, and development.

Very little information is available to estimate imports or exports of R&D services. The NSF's data on business performance of R&D includes R&D funded by foreign entities. What little data exist to break out foreign funding of R&D performed in the United States or U.S. funding of R&D performed abroad are for unaffiliated services and come from the BEA. Estimates based on these data indicate that these imports and exports each represent well under 0.5 percent of total R&D expenditures in the United States during the 1986–2000 period.68 Accordingly, no attempt is made to estimate the magnitude of these R&D services and the spillover from R&D performed abroad, or to gauge whether the spillover rates of return reflect only spillovers to U.S. businesses, excluding spillovers to foreign entities.

R&D expenditures are assumed to enter the capital stock with a one-year lag. With this convention, expenditure in one year becomes investment of the following year, and there is an entry for change in R&D-in-progress. The investment equation is as follows:

(C1)    GDP total investment_t = expenditures_{t–1} + change in R&D-in-progress_t = expenditures_t,

where by definition:

(C2)    Change in R&D-in-progress_t = expenditures_t – expenditures_{t–1},

and

(C3)    Completed R&D_t = expenditures_{t–1}.
As previously noted, the base case scenario uses a geometric depreciation rate of 11 percent to update the capital stock. In the earlier BEA project, straight-line depreciation was combined with a Winfrey (bell-shaped) retirement distribution to construct the BEA R&D capital stocks because this methodology was used at the time to construct BEA estimates of fixed tangible capital stocks. The R&D service life was adjusted to mimic a target 11 percent geometric rate of depreciation since this rate was approximately

68. Data are available for research, development, and testing services for unaffiliated services. See BEA (2001, table 1, pp. 64–65). R&D-affiliated services data are available from 2002 (BEA 2003).
the midpoint of then-available estimates of R&D depreciation rates. The previous project compared estimates using the straight-line/Winfrey methodology and an 11 percent geometric rate and found that the differences were "modest."69

Two alternative depreciation rate scenarios are developed because R&D service lives may have shortened over time given the general overall increase in the rate of technical advance as well as a compositional shift in R&D expenditures over the decade of the 1990s.70 The latter effect is reflected by the increasing share of GDP expenditures devoted to products, such as personal computers, with relatively short life spans, and away from products such as pharmaceuticals with a seventeen-year patent life. In addition, obsolescence-related depreciation rates may increase as the level of R&D expenditures rises and the pace of technical change quickens.71 However, the magnitude and timing of a possible shortening of service lives are difficult to measure. The capital stock equation is

(C4)    Capital stock_t = expenditures_{t–1} + (1 – depreciation rate) × capital stock_{t–1}.
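A minimal sketch of equations (C1)–(C4) under the base case assumptions: expenditures become investment with a one-year lag, and the stock follows a perpetual-inventory recursion with an 11 percent geometric depreciation rate. The expenditure series and the benchmark stock below are hypothetical placeholders.

```python
# Perpetual-inventory sketch of equations (C1)-(C4); all numbers are hypothetical.
expenditures = {1994: 150.0, 1995: 160.0, 1996: 172.0, 1997: 185.0}   # R&D expenditures
depreciation_rate = 0.11
capital_stock = {1994: 1000.0}                                        # assumed benchmark stock

for year in range(1995, 1998):
    completed_rd = expenditures[year - 1]                             # (C3): lagged expenditures
    change_in_progress = expenditures[year] - expenditures[year - 1]  # (C2)
    total_investment = completed_rd + change_in_progress              # (C1): equals current expenditures
    capital_stock[year] = expenditures[year - 1] + (1 - depreciation_rate) * capital_stock[year - 1]  # (C4)
    print(year, total_investment, round(capital_stock[year], 1))
```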
Returns to R&D Capital

A simplified capital service flow equation is used in this paper to estimate returns to R&D capital; all tax terms are ignored:

(C5)    Return = net return + depreciation,

or

(C6)    Return_t = net rate of return × capital stock_{t–1} + depreciation rate × capital stock_{t–1},

where the rate of return is held constant for each scenario over all years but varies depending upon whether a private, spillover, or social rate of return is employed.

69. See Carson, Grimm, and Moylan (1994, p. 45 and box, p. 48) for a comparison for selected years.
70. The NSF's R&D expenditure data by industry are extremely limited. Data for office, computing, and accounting machines (OCAM—the computer category) are only available for 1972–80, 1993–94, and 1997–98. Data for drugs and medicine and for machinery (the latter being the category that includes OCAM) are available for most years from 1953 forward. Analysis of these data shows that the share of R&D devoted to drugs and medicine rose from 1961 to 1998, while that for machinery may have fallen since the mid-eighties. No data are available for nonmanufacturing industries, including service industries, until 1995. See NSF (2001a), "Total (company, federal, and other) funds for industrial R&D performance, by industry and by size of company: 1953–98."
71. This was suggested by Adam Jaffe in his comments at the presentation of Fraumeni and Okubo (2001).
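Equation (C6) in a couple of lines, with illustrative numbers only: a 25 percent gross rate of return for business split into an 11 percent depreciation component and a 14 percent net return, applied to the prior year's stock.

```python
# Sketch of equation (C6); the rates and the stock are illustrative placeholders.
net_rate_of_return = 0.25 - 0.11     # e.g., a 25% gross rate less an 11% depreciation rate
depreciation_rate = 0.11
capital_stock_prev = 900.0           # hypothetical R&D stock at end of year t-1

gross_return = net_rate_of_return * capital_stock_prev + depreciation_rate * capital_stock_prev
print(round(gross_return, 1))        # equals 0.25 * 900 = 225 under these assumptions
```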
Ignoring the tax terms (such as those that would reflect the expensing of many R&D costs and the taxation of profits from R&D) on average tends to underestimate business returns to R&D. Tax terms are not an issue for nonprofit institutions and general government. Thus, since only nonprofit institutions and general government private returns to R&D capital are a net addition to GDP (see table 8.11), appendix equation (C6) provides a good approximation of additions to GDP due to R&D capitalization. Ideally, the equation should be revised to include taxes to adjust the estimates of the return to business R&D capital and the contribution of that capital to GDP growth.

The prior BEA project concluded that gestation lags range from one year to two years, that application lags range from something less than one year to somewhat more than two years, and that lags between the investment and its peak effect on profits may be long, particularly for basic research (Carson, Grimm, and Moylan 1994, p. 44). The application lags may have shortened over the 1959–2000 time period because of the quickening pace of technical change in the past decade and shifts in composition of industry R&D expenditures. However, we lack empirical evidence to support a specific lag form. No attempt is made to adjust for variation in the return to R&D or the time pattern of industry returns to R&D capital. The issue of the peak impact on profits would be moot if the age distribution and composition of R&D capital stock were constant over time—an unlikely case. In this paper, the alternative scenarios lengthen the overall expenditure-to-capital-stock lag from one to seven years.

Deflation and Aggregation

The base case deflator and one alternative deflator, the information processing equipment and software deflator, are NIPA deflators. The other alternative deflator is the BEA 1994 study deflator until 1992, then a slightly modified GDP (NIPA table 7.1, chain-type price index) deflator for all subsequent years.72 From 1987 to 1992 the BEA 1994 study deflator and the GDP deflator are almost identical; from 1974 to 1992, the deflators are very similar except for a couple of years around 1980. The BEA 1994 deflator is extended through 2000 using the GDP deflator growth rate. From 1959 to 1988, except for a few years around 1980, the Griliches-Hall-Jaffe deflator73 and the BEA 1994 study deflator are almost identical. The information processing equipment and software deflator is chosen as an alternative
72. No attempt is made to update the BEA R&D deflator beyond 1992 for two reasons. First, the GDP deflator and the BEA 1994 study deflator are almost identical from 1987 to 1992. Second, updating the BEA 1994 study deflator would require a substantial effort. The BEA 1994 study deflator was estimated by deriving the cost of inputs at the lowest level of detail available, then matching these costs as closely as possible with proxy prices. 73. The deflator is from table 3.1 of Hall (1990) and is described on page 20 of that source. It is constructed using methodology similar to Jaffe (1972). In BLS (1989, p. 45) the deflator based on the 1972 methodology is called the Jaffe-Griliches deflator.
deflator to check the sensitivity of our results to use of a deflator that behaves very differently from either of the other two scenario deflators.

The same scenario-specific deflator is used to deflate R&D investment, stock, and returns to R&D. Additive aggregation is used when creating R&D totals because there are no differences in the underlying deflator. A chain index number formula is used to aggregate across estimates—say, consumption and investment—with different underlying deflators, unless a component is negative. For example, if GDP is equal to the sum of investment (I) and consumption (C), the rate of growth of aggregate 1996 dollar adjusted GDP is calculated as

(C7)    .5 × (nominal dollar I_{t–1}/nominal dollar-adjusted GDP_{t–1} + nominal dollar I_t/nominal dollar-adjusted GDP_t) × (real I_t/real I_{t–1} – 1)
        + .5 × (nominal dollar C_{t–1}/nominal dollar-adjusted GDP_{t–1} + nominal dollar C_t/nominal dollar-adjusted GDP_t) × (real C_t/real C_{t–1} – 1),

a methodology parallel to that used to estimate contributions of R&D to growth (see note 49). The growth rates are then used to extend the real adjusted GDP series before and after 1996, the base year. Additive aggregation is used when a component is negative.
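A sketch of the chain aggregation in equation (C7), with hypothetical component series: each component's real growth rate is weighted by its average nominal share in adjusted GDP over the two years, the weighted growth rates are summed, and the 1996-dollar level is then extended from the prior year.

```python
# Sketch of equation (C7) for a two-component "GDP = C + I" example; data are hypothetical.
nominal = {"C": [2800.0, 2950.0], "I": [700.0, 760.0]}   # nominal values, years t-1 and t
real    = {"C": [2850.0, 2930.0], "I": [690.0, 735.0]}   # real (1996 dollar) values
nominal_gdp = [sum(v[0] for v in nominal.values()), sum(v[1] for v in nominal.values())]

growth = 0.0
for name in nominal:
    weight = 0.5 * (nominal[name][0] / nominal_gdp[0] + nominal[name][1] / nominal_gdp[1])
    growth += weight * (real[name][1] / real[name][0] - 1)

real_gdp_prev = 3540.0                      # 1996-dollar adjusted GDP in year t-1 (hypothetical)
real_gdp_t = real_gdp_prev * (1 + growth)   # level chained forward from the prior year
print(round(100 * growth, 2), round(real_gdp_t, 1))
```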
References Arora, Ashish, Marco Ceccagnoli, and Wesley M. Cohen. 2002. R&D and the patent premium. Working paper presented at the National Bureau of Economic Research Productivity Program. 15 March, Cambridge, Massachusetts. Bernstein, Jeffrey, and M. Ishaq Nadiri. 1988. Interindustry spillovers, rates of return, and production in high-tech industries. American Economic Review Papers and Proceedings 78:429–34. ———. 1991. Product demand, cost of production, spillovers, and the social rate of return to R&D. NBER Working Paper no. 3625. Cambridge, MA: National Bureau of Economic Research. Bureau of Economic Analysis (BEA). 1988. Government Transactions, Methodology Papers: U.S. National Income and Product Accounts BEA-MP-5 Washington, DC: U.S. Department of Commerce, November. ———. 2000. Recognition of business and government expenditures for software as investment: Methodology and quantitative impacts, 1959–1998. Paper pre-
sented to the BEA Advisory Committee, May 5. http://www.bea.doc.gov/bea/ about/software.pdf. ———. 2001. U.S. international services: Cross-border trade in 2000 and sales through affiliates in 1999. Survey of Current Business 81 (November): 49–95. ———. 2003 U.S. international services: Cross-border trade in 2002 and sales through affiliates in 2001. Survey of Current Business 83 (October): 58–118. Bureau of Labor Statistics (BLS). 1989. The impact of research and development on productivity growth. Bulletin no. 2331. Washington, DC: U.S. Department of Labor, September. Carson, Carol S., Bruce T. Grimm, and Carol E. Moylan. 1994. A satellite account for research and development. Survey of Current Business 74 (November): 37–71. Commission of the European Communities/Eurostat, the International Monetary Fund, the Organization for Economic Cooperation and Development, the Statistics Division of the former Department for Economic and Social Information and Policy Analysis and the regional commission of the United Nations Secretariat, and the World Bank. 1993. System of national accounts 1993. New York: United Nations. Council of Economic Advisors (CEA). 1995. Supporting R&D to promote economic growth. http://www.whitehouse.gov/WH/EOP/CEA/econ/html/econ-rpt.html. Denison, Edward F. 1979. Accounting for slower economic growth, the United States in the 1970s. Washington, DC: Brookings Institution Press. ———. 1985. Trends in American economic growth, 1929–1982. Washington, DC: Brookings Institution Press. Economic Report of the President. 1995. Washington, DC: U.S. Government Printing Office, February. Eisner, Robert. 1989. The total incomes system of accounts. Chicago: University of Chicago Press. Fraumeni, Barbara M. 1997. The measurement of depreciation in the U.S. national income and product accounts. Survey of Current Business 77 (July): 7–42. Fraumeni, Barbara M., and Okubo, Sumiye. 2000. Using national accounts to assess the role of R&D in the U.S. economy. In R&D and the knowledge-based society: Linking the production, dissemination, and application of research, ed. Mu Rongping and W. A. Blanpied, 154–67. Beijing: Science Press. ———. 2001. R&D satellite accounts and returns to private R&D. Paper presented at the Allied Social Science Association Meetings. 7 January, New Orleans, LA. ———. 2002. R&D in the national income and product accounts: A first look at its effect on GDP. Paper presented at the National Bureau of Economic Research conference, Washington, DC, April 26–27. http://www.bea.gov/bea/papers.htm. Goto, Akira, and Kazuyuki Suzuki. 1989. R&D capital, rate of return on R&D investment, and spillover of R&D in Japanese manufacturing industries. Review of Economics and Statistics 71 (November): 555–64. Griliches, Zvi. 1973. Research expenditures and growth accounting. In Science and technology in economic growth, ed. B. R. Williams, 59–94. New York: Halstead Press. ———. 1992. The search for R&D spillovers. Scandinavian Journal of Economics 94 (suppl.): S29–47. ———. 1995. R&D and productivity: Econometric results and measurement issues. In Handbook of the economics of innovation and technological change, ed. Paul Stoneman, 52–89. Cambridge, MA: Blackwell. Hall, Bronwyn H. 1990. The manufacturing sector master file: 1959–1987. NBER Working Paper no. 3366. Cambridge, MA: National Bureau of Economic Research, May.
Hall, Bronwyn H., Adam Jaffe, and Manuel Trajtenberg. 2001. The NBER patent citations data file: Lessons, insights, and methodological tools. NBER Working Paper no. W8498. Cambridge, MA: National Bureau of Economic Research, October. Jaffe, Sidney A. 1972. A price index for deflation of academic R&D expenditures. NSF 72-310. Washington, DC: National Science Foundation. Joint Economic Committee, U.S. Congress. 1999. The growing importance of industrial R&D to the U.S. economy. Joint Economic Committee Staff Report. Washington, DC: U.S. Government Printing Office, June. Jorgenson, Dale. 2001. Information technology and the U.S. economy. American Economic Review 91 (March): 1–32. Jorgenson, Dale, Frank Gollop, and Barbara Fraumeni. 1987. Productivity and U.S. economic growth. Cambridge, MA: Harvard University Press. Jorgenson, Dale, and Kevin J. Stiroh. 2000. Raising the speed limit: U.S. economic growth in the information age. Brookings Papers on Economic Activity, Issue no. 1:125–236. Kendrick, John W. 1973. Postwar productivity trends in the United States, 1948– 1969. New York: National Bureau of Economic Research, distributed by Columbia University Press. ———. 1976. The formation of stocks of total capital. New York: National Bureau of Economic Research. Kendrick, John W., and Elliott S. Grossman. 1980. Productivity in the United States, trends and cycles. Baltimore: Johns Hopkins University Press. Lum, Sherlene K. S., and Brian C. Moyer. 2001. Gross domestic product by industry for 1998–2000. Survey of Current Business 81 (November): 17–33. Mansfield, Edwin, John Rapoport, Anthony Romeo, Samuel Wagner, and George Beardsley. 1977. Social and private rates of return from industrial innovations. Quarterly Journal of Economics 91:221–40. Nadiri, M. Ishaq. 1993. Innovations and technological spillovers. NBER Working Paper no. 4423. Cambridge, MA: National Bureau of Economic Research, August. http://papers.nber.org/papers/W4423. Nakamura, Leonard. 2001. Investment in intangibles: Is a trillion dollars missing from GDP? Federal Reserve Bank of Philadelphia Business Review (Q4):27–37. National Science Board (NSB). 1998. Science and engineering indicators, 1998. Washington, DC: U.S. Government Printing Office. National Science Foundation (NSF). 2001a. National patterns of R&D resources: 2000, data update (current to March 2001). Science Resources Studies. www.nsf .gov/sbe/srs/nsf01309/start.htm. ———. 2001b. National patterns of R&D resources: Technical note. Science Resources Studies. Washington, DC: NSF. Oliner, Stephen D., and Daniel E. Sichel. 2000. The resurgence of growth in the late 1990s: Is information technology the story? Finance and Economics Discussion Series Paper no. 2000-20. Washington, DC: Federal Reserve Board. Available at http://www.federalreserve.gov/pubs/2000/20020/200020pap.pdf. Organization for Economic Co-operation and Development (OECD). 2001. OECD productivity manual: A guide to the measurement of industry-level and aggregate productivity growth. Paris: OECD, March. Parker, Robert P., and Eugene P. Seskin. 1997. Annual revision of the national income and product accounts. Survey of Current Business 77 (August): 6–35. Scherer, Frederick. 1982. Inter-industry technology flows and productivity growth. Review of Economics and Statistics 64 (November): 627–34. ———. 1984. Using linked patent and R&D data to measure interindustry tech-
nology flows. In R&D, patents, and productivity, ed. Z. Griliches, 417–64. Chicago: University of Chicago Press. Solow, Robert. 1957. Technological progress and the aggregate production function. Review of Economics and Statistics 39 (August): 312–20. Sveikauskas, Leo. 1981. Technology inputs and multifactor productivity growth. Review of Economics and Statistics 63:275–82. Terleckyj, Nestor. 1974. Effects of R&D on the productivity growth of industries: An exploratory study. Washington, DC: National Planning Association.
Comment
Bronwyn H. Hall
The paper describes an effort to create satellite national income accounts for the United States that treat R&D expenditure as an investment rather than simply current spending. Given the well-documented fact that the returns to R&D spending can occur with long and somewhat diffuse lags, making an effort to produce national accounts that incorporate a measure of the national stock of “knowledge” on a somewhat regular basis is long overdue. Although an earlier attempt was made by Carson, Grimm, and Moylan (1994), inclusion of the R&D satellite account in the NIPA is even now not done on a regular basis. Given the discussions and studies that have taken place and are taking place at international organizations such as the Organization for Economic Cooperation and Development (OECD) and the European Union concerning the measurement of intangible assets, as well as the consideration of capitalizing R&D in firm accounts on the part of the Financial Accounting Standards Board (FASB) and International Accounting Standards Board (IASB), this seems a timely exercise.1 After presenting the changes to the System of National Accounts (SNA) from treating both R&D and some government expenditures as investments, the paper goes on to produce some estimates of the contribution of R&D to overall economic growth, based on a set of assumptions about the private and social rate of return to R&D, and the rate of growth of R&D, using a growth-accounting framework. In my discussion I present an overview of the way in which R&D should be incorporated into the national income accounts (NIA) and then discuss some issues related to the measurement of R&D for this purpose. I will not discuss the growthaccounting exercise, because the final version of the paper omits most of
Bronwyn H. Hall is professor of economics at the University of California, Berkeley, and a research associate of the National Bureau of Economic Research. I am grateful to Barbara Fraumeni for clarifying numerous issues for me. All remaining errors are my own. 1. See http://pages.stern.nyu.edu/~blev/ for information on accounting for R&D as a capital asset in firm accounts.
the description of how this was done and the associated numbers on which it was based, with the exception of the R&D numbers. I conclude by discussing whether the time has come for an R&D satellite account.

The Measurement of R&D in the NIA

In order to understand exactly the changes made to the NIA in order to construct a satellite R&D account, it is helpful to summarize the changes itemized in equations (1)–(16) of the Fraumeni-Okubo paper in a simplified form. The starting point is the accounting identity between the product and income side of the accounts:

    GDP = GDI + SD,

where SD is the "statistical" discrepancy, which is due to measurement problems, to incomplete data, and to the fact that actual profits net of taxes may not always match the notional income from private capital stock. In turn, this equation can be broken down into its components:

    C + I + NE + G = LI + PI + IBT + SD,

where C is consumption, I is investment, NE is net exports, G is government spending, LI is labor income, PI is property income, and IBT is indirect business and nontax liability. LI, IBT, and NE are unaffected by the R&D changes. The adjusted accounting identity between GDP and GDI is therefore the following:

    C_A + I_A + NE + G_A = LI + PI_A + IBT + SD.

In order to compute this adjusted or satellite account, the necessary changes to C, I, and G are given by

    C_A = C + (returns to nonprofit R&D) − (R&D funded by nonprofits and government/performed by others) − (R&D funded by business/performed by nonprofits and government)

    I_A = I + (R&D performed by business and nonprofits)

    G_A = G + (returns to government-performed R&D) + (R&D performed by government) − (R&D funded by government/performed by business)

Because PI is the only quantity on the right-hand side of the accounting identity affected by the changes, the changes to PI are simply the sum of the changes to C, I, and G:
    PI_A = PI + (returns to nonprofit R&D and government-performed R&D) + (R&D performed by government, business, and nonprofits) − (R&D funded by government/performed by government, business, and others) − (R&D funded by nonprofits/performed by government, business, and others) − (R&D funded by business/performed by nonprofits)

or

    PI_A = PI + (returns to nonprofit R&D and government-performed R&D) + (R&D funded by business/performed by business)

Table 8.3 of the Fraumeni-Okubo paper summarizes the changes outlined here. In order to implement them, the authors need to have R&D performance broken down into three categories: government, nonprofit institutions, and private business. Unfortunately, this is not the way the data are collected by the National Science Foundation (NSF). Table 8C.1 shows the data as they come from the NSF, using the year 2000 as an example; for comparison, the BEA numbers used by Fraumeni and Okubo in table 8.4 are shown in the last row. According to the NSF, total R&D spending in current dollars during this year was $265 billion (200 in business, 10 in nonprofit, and 55 in government), whereas the BEA numbers yield a total of $270 billion (202 in business, 21 in nonprofit, and 47 in government). The sectoral differences shown in the table are probably due to the amount of extrapolation and estimation the authors were forced to do in order to conform to BEA sector definitions. In particular, the category labeled universities and colleges by NSF is composed of private institutions, which BEA apparently classifies as business, and public institutions, which BEA classifies as government.
Table 8C.1  R&D performing and funding sectors in 2000 (in $millions)

                                              Performing sector
Funding sector               Government       Industry    Nonprofit        Total
                             (incl. U&C)                   institutions
Government (incl. U&C)           50,585         22,210          4,997      77,792
Industry                          2,310        177,645          1,085     181,040
Nonprofit institutions            2,203                          3,586       5,789
Total                            55,098        199,855          9,668     264,621
BEA numbers                      47,445        202,116         20,876     270,437

Source: http://www.nsf.gov/sbe/srs/nsf01309/start.htm
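To make the funder-by-performer accounting in table 8C.1 concrete, the short sketch below (my illustration, not part of the NSF or BEA methodology) sums the cells of the matrix to reproduce the performing-sector and funding-sector totals quoted in the text; the blank nonprofit-funded, industry-performed cell is assumed to be zero.

# Illustrative aggregation of the NSF funder-by-performer R&D matrix for 2000
# (figures in $millions, from table 8C.1); this is a simple cross-tabulation,
# not BEA's estimation procedure. The blank cell is assumed to be zero.
funders = ["government", "industry", "nonprofit"]
performers = ["government", "industry", "nonprofit"]
rd = {  # rd[funder][performer]
    "government": {"government": 50585, "industry": 22210, "nonprofit": 4997},
    "industry":   {"government": 2310,  "industry": 177645, "nonprofit": 1085},
    "nonprofit":  {"government": 2203,  "industry": 0,      "nonprofit": 3586},
}

funding_totals = {f: sum(rd[f].values()) for f in funders}
performing_totals = {p: sum(rd[f][p] for f in funders) for p in performers}
grand_total = sum(funding_totals.values())

print(performing_totals)  # {'government': 55098, 'industry': 199855, 'nonprofit': 9668}
print(funding_totals)     # {'government': 77792, 'industry': 181040, 'nonprofit': 5789}
print(grand_total)        # 264621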
As described in appendix C, the BEA numbers post-1992 were created by regressing their data for 1953 to 1992 on the NSF data in the closest sector and a time trend, and then extrapolating to 2000. This procedure may not be able to reproduce the actual NSF numbers very well, because of the substantial structural break in those numbers between 1992 and 1993 due to revisions. The conclusion is that before the sectoral numbers on R&D spending are actually incorporated into the SNA, some additional work needs to be done to understand exactly where the discrepancy arises and to create recommendations to the NSF as to which data are needed in order to create an accurate picture of the contribution of R&D to the economy.

Fraumeni's (2003) presentations to Science, Technology, and Economic Policy (STEP) and Committee on National Statistics (CNSTAT) Workshops on Research and Development Data Needs outlined some of the work needed on R&D data for improved accuracy here and elsewhere before information on R&D could be fully incorporated into the SNA:

1. More timely data: currently, annual revisions are released in July, seven months after the end of the reference year.
2. Quarterly estimates, at least for some indicators.
3. More information to avoid double-counting of capital used by R&D.
4. Historical time series consistency at the level of business, nonprofit, and government spending, to avoid the need for the estimation and extrapolation performed in this paper.
5. Historical estimates for imports and exports of R&D.
6. Developing a complete satellite account requires industry detail, including service industries, as well as basic, applied, and development R&D by industry; domestic and international R&D by industry; and some information on the relationships between R&D performers and funders.

As Fraumeni noted in her presentation, access to NSF microdata on the part of BEA might help in some of these areas, and it is hoped that the new data-sharing legislation may facilitate this.2

Measuring R&D Investment

Clearly the key component of the R&D satellite account is the measurement of real R&D capital stock. R&D expenditure is composed primarily of the wages and salaries of technical personnel, expenditures on supplies and materials, and investment in equipment.

2. The Confidential Information Protection and Statistical Efficiency Act passed by Congress in 2003, which permits the Census Bureau, BEA, and Bureau of Labor Statistics to share business data for statistical purposes only, is often referred to as data-sharing legislation. Shortly after the legislation was passed, the R&D Link Project, a project to link the data from the 1997 and 1999 R&D surveys to BEA's 1997 Survey of Foreign Direct Investment in the United States and the 1999 U.S. Direct Investment Abroad survey, was developed under the sponsorship of the NSF, in order to obtain a better picture of the import and export of R&D.
Based on interviews with a number of technology-intensive global companies, Dougherty and others (2004) present an average breakdown of these expenditures into 46 percent labor, 49 percent supplies and other current expenses, and 5 percent capital expenditure.3 At the present time the investment expenditures are counted in the fixed investment measures, and the remainder is intermediate consumption and the wages paid to labor. By treating the entire quantity as an investment in R&D, the R&D equipment investment is double-counted. Although this will create some measurement error in the SNA, the error will be quite small as a share of GDP. Given data on R&D capital equipment expenditure, it might be possible to improve the estimates.

The fact that roughly half of R&D spending is labor compensation raises a more serious question about the choice of deflator used in constructing the real stocks of R&D capital. Although the appendix to the paper constructs scenarios using a deflator based on the Jaffe-Griliches methodology as well as one based on a computing and software price index from NIPA, the body of the paper uses the NIPA fixed nonresidential investment deflator. This deflator is heavily influenced by the falling real price of computing, which in my view makes it unsuitable as an R&D deflator, given the composition of R&D expenses.4 Not surprisingly, as the appendix tables show, the choice among these deflators makes a considerable difference to the estimates of the contribution of R&D investment or R&D capital to growth. If, as Dougherty and others (2004) suggest, the Jaffe-Griliches deflator is preferred, the contribution of R&D to GDP growth will be on the low side of the estimates presented, about 2 percent for investment and 9 percent for the returns to R&D. Over the 1961–2003 period, the real rate of growth of R&D investment would be about 3.3 percent rather than 5 percent.

If the BEA decides to go forward with an R&D satellite account, other issues will arise. The first is that gestation lags may vary a great deal across industries. For example, in the biotechnology industry, R&D projects may last ten years before generating any positive return, whereas in some IT industries, the product life cycle is a year or less. The current assumption of one year for all R&D is therefore a very rough approximation, although the likely impact of changing the lags is small as long as R&D spending is relatively smooth over time. Obsolescence can be a bigger problem for the measurement of private returns than for total returns, because the knowledge created by R&D continues to be useful even after it no longer generates private returns.

3. These estimates are roughly consistent with the much older estimates of R&D composition used by Jaffe (1973) and then by Griliches (1984) to construct an R&D deflator (0.49 labor cost and 0.51 nonfinancial corporation output).
4. During the entire 1961–2003 period, the fixed investment deflator grew by an average annual rate of 2.4 percent, compared to about 4 percent for the Jaffe-Griliches deflator. During the 1992–2003 period, when the difference becomes larger, the corresponding numbers are 2.8 percent and –0.7 percent. The information processing and software deflator itself fell 2.4 percent per year between 1961 and 2003, and almost 5 percent per year during the past ten years.
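Because the real R&D capital stock is the key ingredient in this section, the following is a minimal perpetual-inventory sketch, assuming a constant geometric depreciation rate and hypothetical deflator and spending series of my own choosing; it is not the Fraumeni-Okubo implementation, which also deals with gestation lags and benchmark stocks.

# Minimal perpetual-inventory sketch for an R&D capital stock:
#   K_t = (1 - delta) * K_{t-1} + real R&D investment in year t.
# The depreciation rate, deflator, and nominal spending below are
# illustrative assumptions, not the Fraumeni-Okubo figures.
delta = 0.15                                 # assumed constant depreciation rate
nominal_rd = [100.0, 105.0, 112.0, 118.0]    # hypothetical nominal R&D outlays
deflator   = [1.00, 1.02, 1.05, 1.07]        # hypothetical R&D price index

real_rd = [n / p for n, p in zip(nominal_rd, deflator)]

capital = 0.0                                # or a benchmark initial stock
stocks = []
for inv in real_rd:
    capital = (1.0 - delta) * capital + inv
    stocks.append(capital)

print([round(k, 1) for k in stocks])

Note that the choice of deflator feeds directly into the real series being cumulated, which is why the Jaffe-Griliches versus fixed-investment deflator question discussed above matters so much for the resulting stock.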
The implication is that one needs to be careful in thinking about depreciation when there are spillovers. As a productive input for individual firms, R&D capital may lose its value more quickly than as an input to the economy as a whole. In general, I would prefer to do what the authors do here: use a constant and somewhat conservative depreciation rate for R&D capital that reflects its "expected" contribution, and infer the (possibly fluctuating) private returns from firm profits. That is, private obsolescence will show up as lower net returns to R&D, due to higher actual depreciation.

Should We Introduce an R&D Satellite Account?

With this paper and the earlier Carson, Grimm, and Moylan (1994) paper in hand, what can we say about the wisdom of introducing such an account into the SNA? Both papers have demonstrated the feasibility of such an undertaking and outlined how it should be done; Fraumeni and Okubo have also given us a sense of what the impact would be in terms of growth accounting. A beneficial side effect of such an undertaking would be to focus attention on the collection by government statistical agencies of improved data on R&D expenditures and their composition.5 But as the authors argue, and as is clear from the range of numbers generated under the different scenarios in appendix A, it would be premature to actually incorporate R&D in the SNA itself, so a satellite account seems to be the way to proceed, at least at first.

References

Carson, C. S., B. T. Grimm, and C. E. Moylan. 1994. A satellite account for research and development. Survey of Current Business 74 (11): 37–71.
Dougherty, S. M., R. Inklaar, R. H. McGuckin, and B. van Ark. 2004. Internationally comparable science, technology, and competitiveness indicators. Report to the National Science Foundation. New York: The Conference Board.
Fraumeni, B. M. 2003. CNSTAT R&D panel presentation. Paper presented at CNSTAT R&D Panel Meeting. 23 July, Washington, DC. Available at http://www7.nationalacademies.org/cnstat/Barbara_M_Fraumeni.ppt
Griliches, Z., ed. 1984. R&D, patents, and productivity. Chicago: University of Chicago Press.
Jaffe, S. 1973. A price index for deflation of academic R&D expenditures. NSF Working Paper no. 72-310. Washington, DC: National Science Foundation.
National Research Council. 2004. Measuring research and development expenditures in the U.S. economy: Interim report. Panel on Research and Development Statistics at the National Science Foundation, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Lawrence D. Brown, Thomas J. Plewes, and Marisa A. Gerstein, editors. Washington, DC: The National Academies Press.
5. On this issue, see National Research Council (2004).
9
Communications Equipment: What Has Happened to Prices?
Mark Doms
9.1 Introduction

Measuring prices for goods that exhibit rapid rates of technological change raises special challenges. Over the past several decades, new techniques and data sources have been developed in response to these challenges. For instance, an extraordinary amount of attention has been placed on accurately measuring computer prices, and the results of these efforts have been incorporated into the national income and product accounts (NIPAs). Attention was focused on computers because price declines for computers were easily observed and computers were a growing share of total investment.

One area where there has been rapid technological change but relatively little effort in terms of measuring prices is communications equipment. This is surprising since breakthroughs in communications technology have been a necessary ingredient in constructing the "new economy," as witnessed by the widespread use and growth of the Internet, the creation of office networks, the ability to transmit huge amounts of information over a global fiber optic network, the increased use of cell phones, the increased capacity of cable television networks, and so on. Despite the breathtaking advances in communications equipment,

Mark Doms is senior economist in Applied Microeconomic and Regional Research at the Federal Reserve Bank of San Francisco. The author thanks Neal Dunnay of Kinder Morgan Incorporated (KMI) and Brian Van Steen of RHK for providing data and technical guidance and suggestions. Andrew Batchelor, Niels Burmester, Jon Eller, Susan Polatz, Brigitte van Beuzekom, Courtney Coker, and Margaret Macleod provided research assistance. The author also thanks Carol Corrado, Dan Sichel, Ken Flamm, Frederick Furlong, Matt Shapiro, and an anonymous reviewer for input and advice, and Christopher Greene for editorial comments.
the official price indexes for communications equipment from the Bureau of Labor Statistics (BLS) barely changed over the 1990s. According to the BLS, producer prices for communications equipment increased an average of 0.2 percent between 1991 and 2000.1 The prices for communications equipment stand in stark contrast to those for computers, where BLS shows prices falling an average of 14.5 percent over this period. The Bureau of Economic Analysis (BEA) measure for computers shows prices falling 17.6 percent.2

There are several reasons why research has not progressed more quickly in the field of communication equipment prices. First, communications equipment covers a more diverse set of products than computers. For instance, communications equipment covers, among other things, cell phones, alarm systems, fiber optic gear, and local area network (LAN) equipment. Second, obtaining data on communication equipment prices is more difficult than it is for computers. Large chunks of communications equipment are sold to relatively few customers (for instance, telecom service providers), and prices are not regularly published in periodicals such as Computer Shopper.

Table 9.1 shows some measures of both computer and communications equipment investment in the 1990s. Line 3 shows that the growth rates in nominal investment for computers and communications equipment were remarkably similar during the 1990s. There is also a strong resemblance in the level of investment spending in computer and communications equipment. These facts make it all the more surprising that more research has not been conducted on the prices of communications equipment.

As mentioned before, the biggest difference between the official investment statistics on communications equipment and computers is what has happened to official prices. As shown on line 7 of table 9.1, BEA's official prices for computers fell 17.6 percent on average during the 1990s, whereas communications equipment prices fell an average of 2.1 percent. One reason why the BEA's official prices for communications equipment fall faster than the BLS PPI is that the BEA has incorporated alternative price measures for two communications equipment components. The first is for central office switching equipment, where Grimm (1996) found that prices fell an average of 9.1 percent per year. The second is for LAN equipment, where Chris Forman and I (Doms and Forman 2003) found that prices fell an average of 17.5 percent per year between 1995 and 2000.3

1. The producer price index (PPI) for industry 366 increased at an annual rate of 1.1 percent between 1990 and 1997. Between 1997 and 2000, the PPI fell at an annual rate of 1.2 percent.
2. The BEA measure for computers used to rely on its own measure of prices for larger computers, and the BEA price series fell faster than the comparable BLS series. Consequently, the BEA computer deflator falls faster than BLS's.
3. See Doms and Forman (2003).
Table 9.1  Business investment in computers and communications equipment

                                              Computers    Communications equipment
Average nominal growth in investment (%)
 1. 1990–95                                         5.6           5.0
 2. 1995–2000                                      10.6          10.5
 3. 1990–2000                                       8.0           7.6
Level of investment ($ billions)
 4. 2000                                          109.3         116.8
Average annual price change (%)
 5. 1990–95                                       –12.6          –1.1
 6. 1995–2000                                     –21.6          –2.9
 7. 1990–2000                                     –17.6          –2.1
Average real growth rates (%)
 8. 1990–95                                        20.9           6.2
 9. 1995–2000                                      41.0          13.8
10. 1990–2000                                      31.1           9.9
Why do I think a priori that communications equipment prices fell faster than the BLS data would lead us to believe? First, the small handful of studies that have examined portions of communications equipment have found that prices have fallen quickly, but perhaps not as much as computers. Second, as stated earlier, there are numerous instances of technological change in several parts of the communications equipment spectrum that have revolutionized communication services.

Given the large expenditures made on communications equipment, deriving more accurate measures of prices would likely have consequences for several top-line measures of economic activity. For instance, Jorgenson and Stiroh (2000) run simulations under different scenarios for communications equipment prices. To illustrate the potential importance of deriving more accurate measures of communications equipment, table 9.2 shows the contributions of computers and communication equipment to gross domestic product (GDP) and equipment and software (E&S) growth during the 1990s.4 Line 2 shows that computer investment contributed an average of almost 0.3 percentage point to the average annual growth rate of GDP during the last decade, with a larger contribution in the second half of the 1990s. In contrast, communications equipment investment contributed only about a third as much as computers did. A similar pattern holds for E&S spending. Given the similar nominal shares and growth rates of computers and communications equipment, it seems that the potential payoff of "fixing" communications equipment prices relative to the cost of doing so may be high.

4. The communication equipment GDP calculations were performed by using E&S communication equipment spending and estimates of government and consumer spending. The government and consumer spending estimates were based on the 1992 input-output tables, the last official estimates. The data were further adjusted for imports and exports.
Table 9.2  GDP and equipment and software (E&S) growth: Contributions of computers and communications equipment (%)

                                                    Average annual growth rate
                                                 1990–2000    1990–95    1996–2000
GDP
 1. Growth                                            3.08       2.22        4.13
 2. Contribution of computers                         0.29       0.19        0.38
 3. Contribution of communications equipment          0.08       0.04        0.12
E&S
 4. Growth                                            9.36       7.15       11.85
 5. Contribution of computers                         2.40       1.94        2.98
 6. Contribution of communications equipment          0.24      –0.22        0.77
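As a rough check on the orders of magnitude in table 9.2, the contribution of a component to aggregate real growth can be approximated as its nominal expenditure share times its real growth rate. The sketch below applies this textbook shortcut using the computer real growth rate from table 9.1 and an assumed GDP share of roughly 1 percent (my assumption); it is not the chain-weighted NIPA calculation used in the table.

# Rough contribution of a component to aggregate real growth:
#   contribution (percentage points) ~ nominal expenditure share * real growth rate.
# The 1 percent GDP share is an assumed round number, not a NIPA figure.
nominal_share = 0.01          # assumed average GDP share of computer investment
real_growth = 31.1            # percent per year, 1990-2000, table 9.1, line 10

contribution = nominal_share * real_growth
print(round(contribution, 2))  # roughly 0.3 percentage point, close to line 2 of table 9.2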
The question then arises: how can we accurately measure prices of communications equipment? This paper pursues the following, albeit imperfect, approach. First, I map out the different types of communications equipment and how spending on these different types has changed over the years. The national income and product accounts (NIPAs) data for communications equipment are very aggregated, so information from other sources on the types of communications equipment purchased is used. These data help identify those areas within the communications equipment sector where spending is large and where it has accelerated. This analysis is in the second section of the paper. The third section presents information from past studies on communications equipment pricing. The fourth and fifth sections of the paper can be viewed as continuations of the "house-to-house combat" to improve price measures used in the NIPAs (see Shapiro and Wilcox 1996). Section 9.4 presents new results on the prices of modems and private branch exchanges. The fifth section delves into fiber optic equipment, the area where technological change has been extremely rapid and spending accelerated sharply in the late 1990s and 2000. In the sixth section, the information developed in this paper is combined with the results from elsewhere to come up with several overall price indexes for communications equipment. Finally, the seventh section addresses how these alternative price indexes would affect growth rates in various investment categories and how calculations on capital deepening and multifactor productivity (MFP) would change.

9.2 What Is Communications Equipment?

Communications equipment comprises most of the equipment that sends and receives information and the vast array of types of equipment
that lies between the sender and receiver. Over time, the types of equipment have multiplied—instead of copper wires connecting homes to central switching offices, equipment now exists that sends information over fiber optic networks, satellites, and cell phone towers, in addition to the equipment that makes computer networks run. How to classify all of this equipment is therefore a daunting task. The Standard Industrial Classification (SIC) system and the more recent North American Industry Classification System (NAICS) break down communications equipment into several subcategories: telephone apparatus manufacturing, radio and television broadcasting and wireless communications equipment manufacturing, and other communications equipment manufacturing. Although these categories are useful, over the years there has been movement away from the traditional modes of communication that these systems were based on (namely, the land-line telephone system). Indeed, the industry seems to assign communications equipment to quite different categories.

To help explain what constitutes communications equipment and to help us think about how the types of equipment should be classified, figure 9.1 presents a simplified diagram of communications networks. The way to view figure 9.1 is to be aware that information is sent from the left-hand side of the diagram to the right-hand side.
Fig. 9.1  A simplified version of voice and data communication networks
Notes: Equipment in switching centers contains gear that accepts incoming information in many different languages, protocols, and mediums and then decides where to send the information next. This gear includes voice switches and fiber optic equipment. LAN = local area network; WAN = wide area network; PDA = personal digital assistant; PBX = private branch exchange; KTS = key telephone system.
The items listed on the left are those pieces of equipment that initially send out a message, and the pieces of equipment listed on the far right are those that receive the message. The items in the middle are the necessary equipment a message must traverse to get from the sender to the receiver. The diagram focuses on voice and data networks and omits radio, TV, alarm systems, walkie-talkies, and defense-specific communications equipment. With that said, the equipment shown in figure 9.1 makes up the bulk of communications equipment spending.

The elements on the left-hand side are broken into voice and data. The split between voice and data transmission has arisen because a premium has been placed on the quality of the voice network. When a voice message is transmitted, it is essential that each part of that message arrive at its final destination in the correct order and in a timely manner. Therefore, the resources devoted to sending a voice message are greater than the resources required for sending a comparably sized data transmission.5 On the left side of figure 9.1, there are three elements under the voice heading and just one element (computer) under the data heading. Each of these systems has evolved somewhat separately, and each has its own protocols and formats. However, these distinctions are blurring. For instance, cell phones are being used to transfer data (such as Verizon's Express Network), phone calls can be made over computer networks, and some personal digital assistants have the ability to communicate with networks. With that said, the connections shown in figure 9.1 represent the routes that a majority of voice and data traffic travels, especially for the time period this paper examines.

Once the information leaves the original sending piece of equipment (phone, cell phone, business phone, or computer), the information is usually sent to a location where information from several sources congregates before being sent on the next leg of its journey. This system is akin to the hub-and-spoke system of the airlines. The hub-and-spoke system is used because it is not economical to have direct connections between everyone on a network and everyone else on the network. Therefore, a good chunk of communications equipment spending is on equipment that takes in many signals and decides where to send the signals next. Over time, the equipment that performs this routing function has become ever more complex and able to handle greater volumes of information. Examples of this type of equipment are listed in the middle of figure 9.1.
5. However, as technology improves, the distinction between voice and data transmission will all but disappear. Already, voice-over Internet protocol (VOIP) applications are arising. In the internet protocol (IP) system, a message is broken into packets, and those packets are then routed to their final destination, although not all of the packets will take the same route.
The upper left portion of figure 9.1 shows that most home phones are connected to a local telephone-switching center, usually by copper wire.6 Cell phones instead send a signal over the airwaves to a base station. The signals in the base station are then sent to a switching center, joining calls made from residences. Phones in businesses, government buildings, and academic institutions are often connected to their own phone network. The two dominant types of equipment that run these networks are called private branch exchange (PBX) and key telephone systems (KTS). Usually, PBXs and KTSs are then connected to switching centers.

In figure 9.1, the fourth device used to initially transmit information is the computer. Actually, the parts of the computer that are officially considered communications equipment are modems and LAN cards (the most common of which are Ethernet cards). A computer can be linked to larger networks in one of three ways. First, with a traditional analog modem or a digital subscriber line (DSL), a computer can use a copper phone line to connect to a switching center. Second, computers can use cable TV lines. These first two mediums are popular for home computer use. Many business, government, and academic computers are instead linked to a LAN. A LAN can then be linked to other LANs, to switching centers (say, via a T1 line), or to wide area networks (WANs).

Switching centers contain various types of equipment, including telephone switches, satellite dishes, and the gear that transmits and receives information for fiber optic networks. A message sent from one side of figure 9.1 to the other may go through many such switching centers. For data communication, a message is broken into packets, and those packets can each take different routes to get to the final destination.

9.2.1 Expenditures

Figure 9.1 shows the basic components of the modern communications system. Using data from a wide variety of sources, table 9.3 shows estimates of how much was spent on various pieces of equipment.7 The data in table 9.3 come mainly from private research firms and trade associations. For instance, the data on expenditures on LAN equipment came from Gartner/Dataquest, the data on expenditures for fiber optic equipment came from two firms that follow the industry—and are described in more detail later in the paper—and data on many of the smaller categories came from various issues of the Multimedia Telecommunications Market Review and Forecast by the Telecommunications Industry Association. The data in table 9.3 are only from 1997 to 2000 because those are the years for which I could obtain estimates for all major categories.

6. Switching centers are also referred to as central offices, local central offices, exchanges, and local exchanges. Also, a home phone may transmit and receive information over its cable TV line.
7. The largest omitted categories in table 9.3 are likely radio equipment (outside of cellular phone equipment), broadcast and studio equipment, and alarm systems.
Table 9.3  Spending on communications equipment ($ millions)

                                                        1997      1998      1999      2000   AAGR (%)
Switching center equipment (1)                        27,055    29,577    36,870    46,548       19.8
  Central office switching and transmission
    equipment                                         12,358    13,562    14,475    15,500        7.8
  Fiber optic equipment                                7,160     9,059    14,081    22,061       45.5
  Cellular phone infrastructure                        5,678     4,474     4,955     4,734       –5.9
Data communication (2)                                24,134    27,745    30,306    33,870       12.0
  LAN equipment                                       11,437    13,111    14,027    15,838       11.5
    Routers                                            2,986     3,425     4,525     5,946       25.8
    LAN switches                                       3,787     5,706     6,528     7,645       26.4
    Hubs                                               1,884     1,538       761       419      –39.4
    LAN cards                                          2,780     2,442     2,213     1,829      –13.0
  Modems                                               3,077     3,290     2,800     2,500       –6.7
  Other (WAN, ISDN, ATM, frame relay, etc.)            9,620    11,344    13,479    15,532       17.3
Voice equipment (3)                                   22,855    25,172    26,424    27,437        6.3
  Business phone equipment                            11,795    13,227    14,195    15,135        8.7
    Private branch exchange (PBX)/Key telephone
      system (KTS)                                     6,832     7,601     7,980     8,392        7.1
    Voice processing equipment (call centers,
      voice mail, etc.)                                4,963     5,626     6,215     6,743       10.8
  Wireless handsets (cell phones)                      5,787     7,228     7,619     7,692       10.0
  Consumer products                                    5,273     4,717     4,610     4,610       –4.4
    Cordless telephones                                2,099     2,250     2,240     2,260        2.5
    Answering machines                                 1,173     1,240     1,215     1,210        1.0
    Home fax machines                                  1,367       655       625       620      –23.2
    Corded telephones                                    634       572       530       520       –6.4
Other (4)                                              3,400     3,850     5,400     6,000       20.8
  Capital equipment spending by cable television
    firms not covered above                            3,400     3,850     5,400     6,000       20.8
Total spending (1) + (2) + (3) + (4)                  77,444    86,344    99,000   113,855       13.7
Consumer products                                      8,167     8,331     8,420     8,456        1.2
Total spending – Consumer products                    69,277    78,013    90,581   105,399       15.0
E&S communications equipment                          73,739    81,236    93,350   116,837       16.6

Sources: Dataquest, FCC, TIA, RHK, KMI, and author's estimates.
Note: AAGR = average annual growth rate; E&S = equipment and software.
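The AAGR column in table 9.3 is a standard compound average growth rate. The one-line check below recomputes it for the fiber optic equipment row; the formula is conventional, and only its use here as a verification is mine.

# Average annual growth rate (AAGR) over 1997-2000 (three years of growth):
#   AAGR = (end / start) ** (1 / years) - 1
start, end, years = 7160.0, 22061.0, 3
aagr = 100 * ((end / start) ** (1.0 / years) - 1)
print(round(aagr, 1))   # 45.5, matching the fiber optic equipment row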
In some of the results that follow, the data for individual pieces of equipment go back to the late 1980s, and there are some results for 2001. Most categories experienced growth in nominal spending in the late 1990s. In particular, there was a tremendous surge in spending on fiber optic equipment (the fiber itself is relatively cheap and not considered communications equipment), reflecting a build-out of several large new fiber networks. Fiber optic networks and fiber optic equipment are discussed more fully in section 9.5. Table 9.3 also shows where there were large expenditures made on
communications equipment. The largest single major category is switching-center equipment. And within that category, fiber optic equipment made up the bulk of the expenditures in 2000 (in 1997, the share of central office switching equipment was only half as large). The bottom of table 9.3 shows the total expenditures in the displayed categories in addition to communications equipment investment as reported in the NIPAs. The sums of the categories in table 9.3 are less than the NIPAs because of omissions. However, the sums of the categories have similar growth rates to the NIPA data between 1997 and 2000, and both series show an especially large increase in 2000.

9.3 Previous Research on Prices of Communications Equipment

There has already been research on prices for some of the categories shown in table 9.3. In particular, research has been done on cell phones, central office telephone switches, and LAN equipment. Together, these three categories constitute 42 percent of the communications equipment expenditures in 1997 and 36 percent in 2000. Because one objective of this paper is to devise a best guess for an overall price index, I'll discuss the previous research with an emphasis on what the driving forces were that lowered prices.

9.3.1 Cell Phones

Jerry Hausman (1999) has written a series of papers on the price of cellular telephone service. He cites that when cell phones were first introduced in 1983, they sold for about $3,000. By 1997, they sold for about $200. These are very rough numbers, and it is certain that the average cellular phone in 1997 had a better feature set than an average phone in 1983. With that said, using the Hausman quotes, prices for cellular phones fell an average of 17.0 percent per year between 1983 and 1997.

One of the primary reasons for the drop in the price of cell phones has been the advances made in semiconductors. Cell phones are quite complex devices, and it has been said that if a cell phone were made with vacuum tubes, the cell phone would have to be the size of the Washington Monument. An additional factor that may help explain the drop in cell phone prices is economies of scale in production of the phones and their components. In 2000, there were an estimated 400 million cell phones sold worldwide.

9.3.2 Telephone Switches

Kenneth Flamm (1989) and Bruce Grimm (1996) have examined prices of telephone switches. Telephone switches are pieces of equipment usually installed in switching centers that route calls from one center to another and then to the final destination. The first telephone switches were electromechanical; these switches physically moved to complete a circuit over which a conversation could be held. In the early 1980s, digital electronic
telephone switches performed the same function but without moving parts. The new switches could perform other functions as well. Grimm used data from the Federal Communications Commission (FCC) to run a hedonic regression where the price of the switch was regressed on a number of explanatory variables. He found that between 1985 and 1996, prices fell an average of 9.1 percent per year. However, in the later years of his study, he found that prices fell an average of 16.1 percent.

9.3.3 LAN Equipment

Doms and Forman (2003) examined the prices of different types of LAN equipment in the second half of the 1990s. Local area network equipment is the equipment that sends and routes information over computer networks. The development of this equipment has made accessing the Internet and e-mailing more affordable. Within the LAN equipment aggregate, there are four primary components: LAN cards (the card in a computer that sends and receives information on a computer network), switches (devices that perform simple routing functions), routers (devices that often sit atop computer networks and determine where information packets should be sent), and hubs (pieces of equipment that act as traffic cops in merging information onto a computer network). Doms and Forman found that prices for LAN equipment fell far faster than the comparable BLS PPI would suggest—between 1995 and 2000, prices for LAN equipment fell at a 17.5 percent annual rate, compared with the comparable PPI, which fell only 0.1 percent.8

9.3.4 Indirect Evidence

The aforementioned studies found that prices for various types of communications equipment fell, and fell considerably faster over the time periods examined than the most comparable PPIs. However, making an inference about total communications equipment from these groups is not prudent, as these groups make up just 36 percent of communications equipment spending in 2000. Also, there is a question of whether these studies represent a random sample of communications equipment. They probably do not.

8. Why did prices of LAN equipment fall so quickly? The reasons are interrelated and speculative, and they include the degree of competition, simplicity of the product, the size of the market, and how many of the devices are made. In terms of competition, prices fell fastest for home and small office routers, the segment of the router market with the most competition (this is the segment where Cisco, the dominant maker of routers, has the smallest market share). One reason for the high level of competition is that designing and producing low-end routers is relatively easy, so that many firms can enter. Also, the quantity of low-end routers allows producers to enjoy economies of scale in production of the routers themselves and in the production of the semiconductors used in the routers. Because of steep competition, prices for switches also fell quickly. Again, although Cisco dominates the market, it did not do so initially.
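For readers who want to see what the hedonic approach mentioned in the telephone-switch discussion looks like in practice, the sketch below regresses log price on a characteristic and on time dummies and reads a quality-adjusted index off the time-dummy coefficients. The data and the single "capacity" characteristic are invented placeholders; this is not Grimm's FCC data set or his specification.

# Sketch of a time-dummy hedonic regression:
#   log(price) = a + b * log(capacity) + sum_t d_t * (year == t) + error,
# with the quality-adjusted index given by exp(d_t). Data are made up.
import numpy as np

years = np.array([0, 0, 1, 1, 2, 2])                   # 0 = base year
capacity = np.array([10., 20., 10., 20., 10., 20.])    # hypothetical size measure
price = np.array([100., 180., 88., 160., 75., 140.])   # hypothetical prices

# Design matrix: constant, log capacity, dummies for years 1 and 2
X = np.column_stack([
    np.ones_like(price),
    np.log(capacity),
    (years == 1).astype(float),
    (years == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)

index = np.exp([0.0, coef[2], coef[3]])                # base year = 1.0
print(np.round(index, 3))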
Other indirect evidence that might also shed light on technological change for communications equipment includes a measure of the rate of innovation and the prices for semiconductors used in communications equipment. Devising metrics that somehow capture the rate of technological progress is difficult. Two commonly used measures are patents and research and development (R&D) spending. Unfortunately, R&D spending figures by industry are not published at a fine enough level of detail (communications equipment, SIC industry 366, is sometimes lumped in with other industries in industry 36, such as semiconductors [SIC industry 367]). However, Manuel Trajtenberg (1989) of Tel-Aviv University maintains a database of patents, and those patents are assigned to classes. Figure 9.2 shows the percent of total patents that are awarded to three high-technology fields—computers, communications equipment, and semiconductors—between 1963 and 1999. The share of patents in the category of computers and peripheral equipment has been increasing much faster than that of the other two categories. Also, the share of total patents going to communications equipment has been increasing. One wild card is how the patents for semiconductors are related to chips that are used in computers versus chips that are used in communications equipment. Aizcorbe, Flamm, and Khurshid (2002) examined the prices of semiconductors by final use category, looking at price indexes for semiconductors that go into computers, communications equipment, and so on.
Fig. 9.2  Share of U.S. patents by high-technology category
Source: Manuel Trajtenberg.
Note: Computers and peripherals = computer hardware and software + computer peripherals + information storage.
They find that the content of semiconductors in communications equipment is about 11 percent. Further, they estimate that in the late 1990s, prices for chips that went into communications equipment fell, on average, about 30 percent per year.

So far I have discussed previous studies on communications equipment prices and some other information that suggests that prices for communications equipment should perhaps be falling faster than the official statistics indicate, but perhaps not as fast as the official numbers for computers. The next two sections present new results on three types of communications equipment to better round out what is actually known about prices.

9.4 Modems and PBX/KTS

Measuring prices for high-tech goods has received a voluminous amount of attention in the past two decades. During this time, several techniques have been employed. Two of the more popular methods are hedonics and matched models. In the results presented below and in the results presented in the next section on prices for fiber optic equipment, matched-model techniques are used. One reason for using matched-model techniques is that the available data are best suited for matched models. For many of the products, I do not have access to all of the performance characteristics to estimate a hedonic model, and I am also constrained by the number of observations. However, as Aizcorbe, Corrado, and Doms (2003) demonstrate, the matched-model approach yields numerically similar results to the hedonic approach when net entry of new products is not large. In most of the results presented in this section and the next, this is indeed the case.

One, albeit small, segment of communications equipment where price information is available for a large number of products is modems. The first part of this section analyzes modem prices. The second part examines prices for PBX/KTS equipment, the internal phone networks used by many businesses.

9.4.1 Modems

Although modems are a small part of communications equipment spending, prices for analog modems are relatively easy to obtain because they are sold in the same outlets as personal computers. Examining modems is of interest because there has been tremendous technological change over the past decade. For instance, in late 1991, 9600-baud analog modems were introduced, followed by 14400-baud modems in 1992Q3 and 28800-baud modems in 1994Q4.9

9. A 9600-baud modem is not faster than a 1200-baud modem—the 9600-baud modem has higher capacity. The speed of the signals is dictated by the speed of electricity over copper wires, whereas the signal can be modified and refined to have higher capacity. Although engineers cringe when "higher speed" is used to describe various forms of bandwidth, it is likely that they have lost the semantic battle.
The rate of increase in speed during this period averaged about 63 percent, a bit less than Moore's law. One might suppose that modems are one type of equipment where prices fell faster than other types of communications equipment because of high competition. Also, it is possible that since most personal computers (PCs) come equipped with modems, firms have been able to achieve higher economies of scale than other segments of the communications equipment industry (modems are similar to LAN cards in many ways).

Our study on modems has two parts. The first part analyzes prices for analog (dial-up) modems for PCs from 1989 to 1998. The second part looks at cable modems from late 1999 to late 2001. There are three modem groups that we do not examine: PCMCIA modems (modems commonly used for laptop computers), modems used for business applications, and wireless modems.10

Analog Modems

We gathered information on 681 modems from PC World magazine between 1989 and 1998. Finding advertised list prices for modems became more difficult after 1998 as modems were increasingly offered as standard equipment in computers and the market for stand-alone modems shrank. For consistency, we obtained prices in ads from Computer Direct Warehouse and Arlington Computer Products, two large retailers of PCs and components that had placed ads in each issue of PC World that we examined. The manufacturers represented in the data were U.S. Robotics, Hayes, and Practical Peripherals. Our database contains the listed retail price and the speed for each modem. We matched unique modem models across consecutive quarters. On average, each modem was in our sample for seven quarters. We calculated the geometric mean of prices in two consecutive periods.11 Also, we calculated separate price indexes for different classes of modems. For all of the price indexes shown, we required a minimum of six observations.

10. This last category includes wireless network cards. Wireless networks started to gain widespread popularity in 2001.
11. Standard hedonic regressions could not be run because the advertisements did not provide a complete list of the characteristics of the modems. The advertisements always mentioned the speed and the name of the modem but did not mention any of the other characteristics. For instance, modems vary by the software that is included (such as software that allows the user to use the modem as a fax machine), warranty, speakerphone, mike, caller ID, free tech support, and so on. One alternative would have been to run a fixed-effect hedonic regression, as shown in Aizcorbe, Corrado, and Doms (2003). They show that matched models and fixed-effect hedonic regressions yield similar results when entry and exit are relatively small. A potential problem with using geometric means is that each observation is treated equally. If the revenue shares were known for each modem, then we would use that information to construct a matched-model superlative index. If there is a correlation between price changes and the size of the revenue shares, then the geo-means approach yields biased results. For instance, substitution bias can occur if revenue moves toward those modems whose prices are falling relatively rapidly.
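A stripped-down version of the matched-model calculation described above is sketched here: for each pair of adjacent quarters, take the geometric mean of price relatives over the models observed in both quarters, then chain the results. The model names and prices are invented; the actual index also applies the minimum-observation rule and the modem-class splits discussed in the text.

# Chained matched-model price index using unweighted geometric means
# (Jevons) of price relatives for models present in adjacent periods.
# Prices below are invented; they are not the PC World data.
from math import exp, log

prices_by_quarter = [
    {"modemA": 200.0, "modemB": 350.0},
    {"modemA": 170.0, "modemB": 320.0, "modemC": 400.0},
    {"modemB": 290.0, "modemC": 340.0},
]

index = [1.0]
for prev, curr in zip(prices_by_quarter, prices_by_quarter[1:]):
    matched = [m for m in prev if m in curr]          # models in both quarters
    rel = exp(sum(log(curr[m] / prev[m]) for m in matched) / len(matched))
    index.append(index[-1] * rel)

print([round(x, 3) for x in index])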
The overall analog modem price index is based on an average of about 100 observations. Our results are shown in figure 9.3. For comparison, we also place the BEA computer price index on this figure. Our overall modem index falls an average of 15.6 percent over the sample period, compared with the 16.3 percent drop in the BEA computer deflator. Initially, our price index falls more slowly than the BEA computer deflator, when 1200- and 2400-baud modems were commonplace and the 9600-baud modems had not yet been introduced. However, with the advent of 9600- and 14400-baud modems, prices fell more quickly, especially in the case of the 14400-baud modems. Prices continued to fall, although the rate of decline does vary from period to period. Of interest is that prices of the 56K modems increased during their first year in our sample and then quickly fell.

Figure 9.4 highlights modems that are over 10K baud. We computed an over-10K-baud index and compared it with a BLS modem index for this category. Our index fell significantly faster than the BLS index: between 1994Q2 and 1998Q3, our index fell at an average rate of 15.4 percent, compared with 8.7 percent for BLS.

Cable Modems

In the past several years, adoption of modems for broadband access (DSL and cable) has increased markedly. I was able to obtain industry-level estimates on revenue and quantities for cable modems between 1999Q4 and 2001Q1. The data on revenues and quantities come from Gartner, a private research firm that tracks a number of high-technology industries. According to Gartner, most cable modems are similar in that they must meet the Data Over Cable Service Interface Specification (DOCSIS) standard, the standard that defines "the interface requirements for cable modems involved in high-speed data distribution over cable television system networks" (CableLabs website, www.cablemodem.com). The characteristics of cable modems over this period are fairly uniform; that is, a cable modem in 1999 was not that much different from a cable modem in 2001. Using the Gartner data, the average price of a cable modem fell from $228 in 1999Q4 to $145 in 2001Q1, an average price decline of 30.4 percent. The results for the cable modems are shown in the upper left of figure 9.3.12

9.4.2 PBX/KTS

My attention now turns to the prices of the phone systems located in many businesses, government offices, and other sites. One reason to examine this segment is that sales of these systems exceeded $8 billion in 2000.

12. Cable modems are now being increasingly sold in retail stores. Recent prices (March 2002) for DOCSIS cable modems are about $100, a decline of about 30 percent from the year-ago Gartner prices.
Fig. 9.3  Price indexes for modems
Source: Analog modem data were collected from CDW and ACP advertisements in PC World magazine beginning February 1989 (Q1). The 681 observations were generally U.S. Robotics, Hayes, and Practical Peripheral models. Cable modem figures (1999Q4 to 2001Q4) were derived from market statistics produced by Gartner, Inc. Computer and communications equipment price data are from BEA.

Fig. 9.4  Price indexes for modems over 10K bps
Source: Analog modem data were collected from CDW and ACP advertisements in PC World magazine beginning February 1989 (Q1). The 681 observations were generally U.S. Robotics, Hayes, and Practical Peripheral models. Cable modem figures (1999Q4 to 2001Q4) were derived from market statistics produced by Gartner, Inc. Computer and communications equipment price data are from BEA.
These telephone systems allow users to call one another without using central switching centers. These telephone systems are smart enough to know that when, for instance, you dial a four-digit code, the call you are placing is to another phone within the system. Sometimes on these systems, you have to dial a "9" to get an outside line—that is, a line that is most likely connected to a switching center. These systems fall into two categories: PBX and KTS. A PBX is a bit more sophisticated than a KTS in that the number of phone lines entering a location is less than the number of phones in that location.

Both PBX and KTS systems have many features, and the feature set has grown over time. Perhaps this isn't surprising. Newer features include call forwarding, call waiting, caller identification (ID), plug-in capability to a T1 line, message centers, and so on.13 According to an industry contact, the prices of additional features have been falling over time. However, this contact also said that getting price data would be difficult because prices are usually quoted only after a request for a specific system with specific features has been placed.

I was able to get some data that can produce a bound for prices. Several private firms track the industry and classify PBX/KTS systems by how many lines they have. These firms have revenue estimates for "basic" systems. What exactly constitutes a "basic" system has changed over time, with a "basic" system that is sold today having more features than a "basic" system of five years ago. The data on revenue and on average prices are presented in table 9.4 and come from Gartner/Dataquest; they appear as they are published. That is, there is no underlying detail that is readily available that is not shown in the table. As shown at the bottom of the table, prices fell an average of 5 percent between 1994 and 2000. Again, I believe these estimates, based on the Gartner data, to be upper bounds—prices likely fell even faster because the basic configurations improved over time, and my estimates do not pick this up. Just how much the estimates are biased is uncertain.

13. There is a trend toward converting traditional circuit-based PBXs to IP technology. A UNIX or NT server would run the phone system instead of a PBX. The phone system would become a computer network, and each phone would have an IP address. Although this technology is gaining popularity, sales of IP-based phones were reported to be small during the sample period considered in this paper.
Table 9.4  Average prices and quantities for private branch exchange systems, by number of lines

                                              Number of lines
                       1 to 8    9 to 24   24 to 48   49 to 100   101 to 400   401 to 1,000   Price index
1994   Price            2,701      6,856     17,226      47,806      170,511        589,428          1.00
       Quantity       190,000     99,200     50,400      27,400        5,400          1,100
       Revenue            513        680        868       1,310          921            648
1995   Price            2,851      7,231     18,465      50,057      169,567        567,416          1.03
       Quantity       199,200    100,600     47,200      24,600        6,300          1,200
       Revenue            568        727        872       1,231        1,068            681
1996   Price            2,374      6,819     18,406      44,295      166,433        530,018          0.96
       Quantity       187,814    117,311     46,759      26,863        6,982          1,298
       Revenue            446        800        861       1,190        1,162            688
1997   Price            1,900      6,953     17,851      43,054      148,222        498,172          0.90
       Quantity       283,544    122,099     50,570      34,798        8,366          1,553
       Revenue            539        849        903       1,498        1,240            774
1998   Price            1,732      7,253     17,799      40,319      139,287        435,616          0.86
       Quantity       289,765    113,446     52,744      39,974       10,696          1,769
       Revenue            502        823        939       1,612        1,490            771
1999   Price            1,733      6,622     16,138      38,462      122,320        413,277          0.79
       Quantity       233,233    124,369     63,167      44,283       13,898          1,842
       Revenue            404        824      1,019       1,703        1,700            761
2000   Price            1,996      6,424     15,550      36,393      103,316        408,678          0.75
       Quantity       228,883     96,716     57,085      36,215       15,734          1,692
       Revenue            457        621        888       1,318        1,626            691
AAGR (%)                                                                                              –4.8

Sources: U.S. Premise Switching Systems Market Share and Forecast, 2000; Market Statistics, Gartner Group, Inc., May 8, 2000; U.S. Premise Switching Systems Market Share and Forecast, 1999; Market Statistics, Gartner Group, Inc., April 12, 1999; and U.S. PBX/KTS and PCX Market Share and Forecast, 2001; Market Statistics, Gartner Group, Inc., May 31, 2001.
Note: Revenue is in millions of dollars.
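One natural way to collapse the class-level prices and revenues in table 9.4 into a single index is a chained Tornqvist, which weights each line-size class's price relative by its average revenue share in the two adjacent years. The sketch below uses made-up numbers, and whether this particular formula is the one behind the table's price index column is my assumption; the source does not state the formula.

# Chained Tornqvist price index across product classes:
#   ln(P_t / P_{t-1}) = sum_i 0.5 * (s_{i,t-1} + s_{i,t}) * ln(p_{i,t} / p_{i,t-1}),
# where s_i are revenue shares. Prices and revenues below are illustrative.
from math import exp, log

prices = [  # price per system by class, two adjacent years
    {"small": 2700.0, "medium": 17000.0, "large": 170000.0},
    {"small": 2400.0, "medium": 16000.0, "large": 150000.0},
]
revenues = [
    {"small": 500.0, "medium": 900.0, "large": 1000.0},
    {"small": 450.0, "medium": 950.0, "large": 1100.0},
]

def shares(rev):
    total = sum(rev.values())
    return {k: v / total for k, v in rev.items()}

s0, s1 = shares(revenues[0]), shares(revenues[1])
log_change = sum(
    0.5 * (s0[k] + s1[k]) * log(prices[1][k] / prices[0][k]) for k in prices[0]
)
print(round(exp(log_change), 3))   # index for year 1 with year 0 = 1.0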
9.5 Fiber Optics

The area within communications equipment where the most rapid technological innovations have occurred within the past several decades is fiber optics. Instead of using electrical signals to carry messages over copper wires, there has been an increasing move toward using pulses of light over thin fibers of pure glass. Although there have been improvements in the glass, the most significant innovations have come in the equipment used to transmit and receive the light impulses. This section provides a brief overview of fiber optics and then presents some information on what has happened to the prices of the equipment used in fiber optic networks.

9.5.1 Overview of Fiber Optics

The use of light in communications has existed for some time, including the use of smoke signals, "one if by land, two if by sea," semaphore, and other visual means when there was direct line of sight. In a development
that portended great things to come, Alexander Graham Bell invented the photophone. The photophone is basically a mirror that aims a beam of light to a receiver. The source of the light is the sun. The photophone has a device that vibrates a mirror as someone speaks. At the receiving end, a detector picks up the vibrations in the beam of light and converts the vibrations back into voice (analog technology). The sun is not a reliable light source, and the photophone now languishes on the shelves of the Smithsonian Institution. However, using light to carry information has proven to be as revolutionary as the phone itself.

The next big extension of the Bell idea was to transmit light over a medium that didn't have to go in a straight line and didn't require the cooperation of the sun. Between 1850 and 1960, a series of scientific discoveries led to the use of glass fibers, sheathed in various materials, as a medium through which to transmit light. In 1960, the laser was invented. Lasers are able to focus large amounts of light into very tight streams, making them ideal for sending light down a thin glass fiber strand. Refinements in lasers and fibers continued through the 1960s and 1970s, and in 1977 the first field trials were conducted that used fiber optic cables to transmit voice calls in Chicago. There has been near-continuous improvement since, especially in terms of the amount of information that can be transmitted by a beam of light and the number of light waves that can be simultaneously sent down a piece of fiber optic cable.

Fiber optic networks are complex and require many different types of equipment. The basic components of a fiber optic network are the fiber through which the light pulses are sent, a transmitter, a receiver, and a regenerator. As I mentioned before, the fiber itself is not considered communications equipment. A transmitter is a device that takes a signal (perhaps an electrical signal), translates that signal into light pulses, and then sends those light impulses into a piece of glass fiber. Lasers are often used to send the light. Because the capacity of fiber is very high, the transmitter is able to take in many signals simultaneously and translate them all into a single wavelength of light. Taking many signals and merging them into a single data stream is called multiplexing.14 The devices that take in many tributaries to produce one stream of information are called multiplexers.15 At the end of the fiber, there is a receiver, called a demultiplexer.16 The demultiplexer receives the light signal, converts it into electrical signals, and then sends them out to multiple conduits, the reverse of what happened at the beginning.

14. "Multiplexing" is used in a somewhat similar way in juggling.
15. At a point in the process, electrical signals have to be converted to light, and light converted back to electrical signals. The equipment that does this—often the multiplexers—is very expensive. Over the past several years, there has been research on "all-optical" networks, networks that would not need the expensive conversion equipment.
16. Once the pulses of light go merrily down the fiber strand, they begin to disperse somewhat, losing some of the tight form they had initially. This dispersion is called attenuation. At certain intervals, a fiber network may have a "regenerator," a device that reads in the signal, cleans it up, and sends it out again. I was not able to find any information about regenerators, either in terms of expenditures or in terms of prices and characteristics.
That's a simplified version of the basics. In 1996 the basics got a bit more complicated—and exciting. Instead of using one laser to shoot light down a strand of fiber, why not use two or more that operate at different wavelengths and shoot these beams down the fiber simultaneously? One example of the logic behind why this technology works is that when you are in a roomful of people talking, you can sometimes concentrate and just listen to one voice. Your mind acts as a filter and blocks out the other voices. If too many people are talking, and the voices are similar, then hearing a single voice is difficult. Just as the voices have to be different, the wavelengths of the lasers also have to be different. An amazing consequence of this technology is that the capacity of a single piece of fiber instantly increases. This technology is called dense wave division multiplexing (DWDM).

The advent of DWDM technology created quite a hoopla, and references are frequently made to the increased capacity of a piece of fiber because of DWDM. In 1996 (when only a small handful of DWDM systems were initially deployed commercially), the maximum amount of information that could be transmitted through a piece of fiber was 2.5 gigabits per second (Gb/s). In 2000, DWDM systems could shoot 40 wavelengths, each carrying 2.5 Gb/s, for a total of 100 Gb/s. In just four years, DWDMs alone increased the potential capacity of a piece of fiber by a factor of 40, well above the pace of Moore's law.

Figure 9.5 provides a simplified version of a long-haul fiber optic network (most expenditures on fiber and fiber optic equipment in the 1990s were on long-haul networks). The central circle in figure 9.5 is a fiber ring. Many networks are built using the ring concept—several fibers are used to connect two points so that data can be transmitted in either direction. If the ring is severed at one point, the data can be transmitted in the other direction. Various pieces of equipment are used to send and receive information over fiber.

In the late 1990s and into 2000, fairly large expenditures were made on the various pieces of fiber optic equipment. Estimates of these expenditures are presented in table 9.5, and these estimates came from one of three different private firms: Gartner/Dataquest, RHK, and KMI. RHK is a private firm that tracks the telecommunications industry, and KMI is a private firm that specializes in tracking the fiber industry, especially the amount of fiber cable laid. The table shows that nominal spending on fiber optic equipment has grown at close to a 20 percent annual rate since 1994.17

17. The fiber optic cable is not classified as communications equipment according to the SIC and NAICS. Also, the cost of the fiber cable itself is relatively cheap. Estimates are that in 1999 and 2000, about $3 billion was spent annually on the cable. In contrast, expenditures on the equipment used to transmit and receive information over the cable topped $22 billion in 2000. Also, telecom service providers have to invest in other forms of equipment to get a fiber optic network up and going, including computers.
Fig. 9.5    Basic long-haul fiber optic network
Table 9.5    Nominal spending on fiber optic equipment ($ millions)

                               1994    1995    1996    1997    1998     1999     2000     2001   AAGR (%)
Total fiber optic equipment   3,443   4,163   5,112   7,160   9,059   14,081   22,061   11,842      19.3
  Percent change                       20.9    22.8    40.0    26.5     55.4     56.7    –46.3
Multiplexers (a)              1,865   2,406   2,656   3,562   4,556    7,291   10,573    4,880      14.7
DWDM (b)                          0       0     381   1,595   1,854    3,427    6,740    4,348      62.7
DCCs (c)                      1,111   1,182   1,455   1,581   2,210    2,749    3,950    1,914       8.1
Other                           467     576     620     422     439      614      799      700       5.9

Sources: RHK, Gartner, KMI.
(a) Multiplexer (also known as a "mux"). Devices that can input multiple signals to produce a single signal, and devices that input a single signal and produce multiple signals. An add/drop mux selectively adds and/or drops wavelengths without having to use terminal equipment. A terminal mux extracts all signals/wavelengths from the fiber.
(b) Dense wave division multiplexing (DWDM), also known as wavelength division multiplexing, transmits multiple lightwaves down a single piece of fiber. For instance, there are DWDM devices that can send and receive 40 different wavelengths, with each wavelength having a maximum capacity of 2.5 gigabits per second. A DWDM system of this type could send 100 gigabits per second.
(c) Digital cross-connect (DCC) is a device that is able to perform simple extraction and merging onto a fiber ring.
A large ramp-up in spending for fiber optic equipment occurred in 1999 and 2000 as a flood of companies rushed into the long-haul fiber optic business; at the end of 2000, there were eleven companies that had more than 10,000 route miles of fiber. Unfortunately, there was a tremendous amount of duplication on the main routes, and many of the companies went bankrupt. As a consequence, expenditures on fiber optic gear dropped precipitously in 2001.18

9.5.2 Prices of Fiber Optic Equipment

To reiterate, getting information on prices for fiber optic gear is difficult because there are relatively few firms that produce the equipment, and the number of customers is fairly limited as well. Consequently, standard price catalogs do not exist. An analyst at RHK, Brian Van Steen, tracks prices and quantities for a large number of pieces of fiber optic gear, and he was kind enough to share his results for multiplexers and for DWDM equipment. The information for digital cross-connects came from Gartner. A series of tables in this section shows the data used in the analysis. Each table contains information on a different piece of fiber optic equipment, and price indexes were formed by using a matched-model, superlative index number approach. The reader can skip ahead several pages to the discussion of table 9.9, which presents the summary results for fiber optic equipment.

Multiplexers

Multiplexers vary in several dimensions, including the capacity of the signal they produce and their range: ultra long haul (more than 600 kilometers), long haul (60–600 kilometers), or metro (less than 60 kilometers). Through 2000, a plurality of expenditures on multiplexers was for the long-haul add/drop variety.19 Table 9.6 presents quantity, price, and revenue estimates of long-haul add/drop multiplexers by capacity. As shown at the bottom of table 9.6, between 1997 and 2001, prices fell an average of 15 percent, with the largest price declines in 2000 and 2001 for the higher-capacity models. In the earlier years (1996–98), prices fell at a more modest pace when, according to industry sources, competition was not very strong; although there are several producers of multiplexers, each market segment was relatively concentrated. However, in 2000 and 2001, prices fell especially fast, particularly for optical channel (OC) 48 devices, in part because the market was flooded with devices from bankrupt firms and because of increased competition from Cisco. Additionally, as demand fell in 2001, producers cut their prices further.
18. The advent of DWDM equipment may have hastened the collapse of several long-haul fiber companies. Because DWDM has increased the potential capacity of a piece of fiber manyfold, when demand on a certain fiber route increases, it is relatively easy (that is, cheap) to increase the capacity of the existing line instead of lighting up another fiber. Therefore, not as many fibers are needed to transport a given amount of information. 19. Another characteristic is the protocol that multiplexers use. In the long-haul market, most networks use the synchronous optical network (SONET) protocol. In the metro market, other protocols are more common, like Ethernet.
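The matched-model, superlative approach referred to above can be summarized in a few lines of code. The sketch below is my own illustration, not the RHK data or the chapter's exact program: it chains a Fisher price index across models that are observed in adjacent years, which is the basic logic behind matched-model indexes of this kind. Model names and numbers are hypothetical.

```python
# Minimal sketch of a matched-model chained Fisher price index.
# Each year maps model -> (price, quantity); values are made up for illustration.
years = {
    1999: {"OC48": (120.0, 11.7), "OC192": (350.0, 4.5)},
    2000: {"OC48": (96.0, 24.0), "OC192": (280.0, 11.3), "OC768": (900.0, 0.5)},
    2001: {"OC48": (50.0, 11.0), "OC192": (220.0, 5.8), "OC768": (600.0, 1.2)},
}

def fisher_link(prev, curr):
    """Fisher price relative between two periods, using matched models only."""
    matched = prev.keys() & curr.keys()
    laspeyres = sum(curr[m][0] * prev[m][1] for m in matched) / \
                sum(prev[m][0] * prev[m][1] for m in matched)
    paasche = sum(curr[m][0] * curr[m][1] for m in matched) / \
              sum(prev[m][0] * curr[m][1] for m in matched)
    return (laspeyres * paasche) ** 0.5

index = {min(years): 1.0}
ordered = sorted(years)
for t0, t1 in zip(ordered, ordered[1:]):
    index[t1] = index[t0] * fisher_link(years[t0], years[t1])

for year in ordered:
    print(year, round(index[year], 3))
```

New models enter the calculation only once they can be matched across two adjacent years, which is why pure matched-model indexes can miss quality change embodied in entering models.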
Table 9.6    Prices and quantities for long-haul add/drop multiplexers

                                          1996     1997     1998     1999     2000     2001   AAGR (%)
System units (thousands)
  OC3 (155.62 megabits per second)         3.9      5.0      7.2      8.0      9.3     10.1     24.7
  OC12 (622.08 megabits per second)        2.8      3.9      5.8      6.4      8.5      6.1     32.4
  OC48 (2,488.32 megabits per second)      6.1      6.5      7.6     11.7     24.0     11.0     41.0
  OC192 (9,953.28 megabits per second)                       1.5      4.5     11.3      5.8    174.5
  Total                                   12.7     15.4     22.2     30.5     53.1     33.0     43.1
Revenue ($ millions)
  OC3                                     77.0     90.0    123.0    120.0    125.8    122.2     13.0
  OC12                                   132.0    154.0    216.0    210.0    254.0    165.4     17.8
  OC48                                   917.0    947.0  1,039.0  1,398.1  2,302.2    550.0     25.9
  OC192                                                      599.0  1,571.3  3,175.2  1,277.4  130.2
  Total                                1,126.0  1,191.0  1,977.0  3,299.4  5,857.2  4,880.0     51.0
Average unit values ($000s/unit)
  OC3                                     20.0     18.0     17.0     15.0     13.5     12.2     –9.5
  OC12                                    48.0     40.0     37.0     33.0     30.0     27.0    –10.9
  OC48                                   151.0    145.0    136.4    120.0     96.0     50.0    –19.8
  OC192                                                      398.0    350.0    280.0    220.0   –17.9
Change in prices (%)
  OC3                                             –10.0     –5.6    –11.8    –10.0    –10.0     –9.5
  OC12                                            –16.7     –7.5    –10.8     –9.0    –10.0    –10.9
  OC48                                             –4.0     –6.0    –12.0    –20.0    –47.9    –19.8
  OC192                                                             –12.1    –20.0    –21.4    –17.9
  Average                                          –6.1     –6.1    –11.9    –19.2    –30.3
Matched model price index                 1.00     0.94     0.88     0.78     0.63     0.44
  Percent change                                   –6.1     –6.1    –11.9    –19.2    –30.3    –15.2

Source: RHK.
What problems might there be with these results? Generally, the quality of equipment within a capacity category is believed to improve over time. For instance, the size of multiplexers is shrinking and the amount of power they consume is also declining. If there is a bias in these results, it is, as with the PBX equipment results, toward understating price declines. Based on conversations with industry analysts, the bias is likely to be small since the importance of the unobserved characteristics seems small relative to the capacity of the machines.
Table 9.7    Average prices for selected long-haul (DWDM) equipment, 1998–2001

                                            1998     1999     2000     2001   AAGR (%)
Average prices for terminals ($000s/terminal)
  WDM terminal
    32 channel OC-48 open                  1,316      796      652      457     –29.7
    32 channel OC-192 open                 2,532    2,236    1,804    1,346     –19.0
    32 channel OC-48 integrated              516      636      524      367     –10.7
    32 channel OC-192 integrated           1,412    1,436    1,164      866     –15.0
    96 channel OC-48 open                  3,684    2,076    1,676    1,173     –31.7
    96 channel OC-192 open                          6,396    5,132    3,842     –22.5
    96 channel OC-192 integrated                    3,996    3,212    2,402     –22.5
    160 channel OC-48 open                 6,052    3,356    2,700    1,890     –32.2
    160 channel OC-192 open                        10,556    8,460    6,338     –22.5
    160 channel OC-192 integrated                   6,556    5,260    3,938     –22.5
  Average percent change                           –19.1    –19.2    –27.2
  Price index                               1.00     0.81     0.65     0.48     –21.9

Average prices for channel cards ($000s/card)
  OC-48 XCVR transponder cards                37       20       16       11     –32.9
  OC-192 XCVR transponder cards                        65       52       39     –22.5
  SONET OC-48 XCVR line cards                 18       15       12        8     –22.4
  SONET OC-192 XCVR line cards                         40       32       24     –22.5
  Average percent change                           –31.3    –20.0    –27.5
  Price index                               1.00     0.69     0.55     0.40     –26.4

Actual expenditures on channel cards and terminals ($ millions)
  Expenditures on DWDM terminals             926    1,782    3,958    2,531      39.8
  Expenditures on channel cards              928    1,645    2,782    1,818      25.1

Overall chain-weighted DWDM price index     1.00     0.75     0.60     0.44
  Percent change                                    –25.0    –19.5    –27.3     –24.0

Source: RHK.
Note: For DWDM terminals, "open" means that the mux is not part of the box, whereas "integrated" means the mux and the DWDM are in the same unit.
DWDM

Table 9.7 presents average prices for two types of DWDM gear: terminals and channel cards. Most DWDM terminals are designed so they can easily expand their capacity—that is, the number of wavelengths of light they can transmit. Each wavelength is also referred to as a channel. For each channel, a DWDM terminal needs a channel card. The channel card fits into the DWDM terminal much like expansion cards fit into PCs. If four channel cards are added to a terminal, then that terminal can transmit four different wavelengths simultaneously. As demand increases, additional channel cards are added as needed. Increasing capacity in this way can be done quickly. For instance, according to KMI, the average DWDM terminal had a maximum capacity of forty-two channels, but, on average, only twelve of those channels were used. The sales of cards are quite large—between 1998 and 2001, about 44 percent of DWDM sales were for cards.
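As a small illustration of the terminal/card relationship just described (my own sketch; the 42-channel maximum and 12 lit channels are the KMI averages cited in the text, the per-channel rate is an assumption):

```python
# Illustrative only: capacity of a DWDM terminal as channel cards are added.
max_channels = 42            # average maximum capacity of a terminal (KMI figure in text)
installed_cards = 12         # average number of channels actually lit (KMI figure in text)
gbps_per_channel = 2.5       # assumed OC-48 payload per wavelength

lit_capacity = installed_cards * gbps_per_channel      # capacity in service
potential_capacity = max_channels * gbps_per_channel   # capacity if fully carded

print(f"In service: {lit_capacity} Gb/s of a potential {potential_capacity} Gb/s")
# Capacity on an existing route is expanded by adding cards, not by lighting new fiber.
```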
The remainder of the expenditures was on the terminals themselves and other miscellaneous equipment. RHK has collected some information on prices for various pieces of DWDM gear, and those are presented in table 9.7. The first portion of table 9.7 presents price estimates for a sample of DWDM terminals. The terminals vary by the number of cards they can accept and the capacity of each card. Many other possible configurations exist in the market (such as four-channel systems), but I don't have any information on those. Between 1998 and 2001, prices for this set of terminals fell an average of almost 22 percent. The bottom portion of table 9.7 presents average prices for a sample of channel cards. Prices for channel cards fell an average of 26 percent.

Digital Cross-Connects (DCCs)

The smallest of the three fiber optic components in terms of expenditures in 2000 were DCCs. Again, the data used to examine prices in this category are akin to the data for multiplexers and for PBX systems. Therefore, the argument that the estimated price indexes are biased upward, understating the actual price declines, holds for these categories as well. The data came from Gartner/Dataquest for three types of DCCs: narrowband, wideband, and broadband. The difference between these groups is basically the capacity of the circuits they can tap into. Table 9.8 shows quantities, revenues, and prices between 1994 and 2000. The far right column shows that price declines for DCCs were generally in the single digits.

9.5.3 Summary of Fiber Optic Prices

Table 9.9 presents the summary measures for prices, nominal expenditures, and real expenditures for fiber optic equipment. The price indexes are those that were derived in the previous tables. I extrapolated the price indexes for several years where needed, and those extrapolations are described at the bottom of the table. Since 1994, fiber optic equipment prices fell an average of 12.4 percent, with the sharpest declines in 2000 and 2001. This result goes against the perception that prices fell dramatically (more dramatically than 12.4 percent a year, at least) for fiber optic equipment, the poster child of rapid innovation. However, digging beneath the surface yields some important clues as to why prices fell only an average of 12.4 percent.

First, notice that fiber optic equipment prices fell just a bit more than 5 percent a year between 1994 and 1996 (as shown in the top portion of table 9.9). The relatively slow decline in prices in these years is attributable to the relatively mild declines in DCCs (and the relatively large nominal share of DCCs in those years) and the assumed relatively slow decline in prices for multiplexers. Keep in mind that the price indexes for multiplexers in those years were based on prices falling 6.1 percent, an ad hoc assumption I made. I made this assumption because prices in 1997 and 1998 were estimated to have fallen by that amount. Prices for multiplexers fell much faster in later years as competition and the supply of multiplexers in the second-hand market increased.
Table 9.8    Quantities, revenues, and prices for digital cross-connects (DCCs)

                              1994     1995     1996     1997     1998     1999     2000   AAGR (%)
Units (thousands)
  Narrowband (T1 ports)      440.3    429.9    452.5    419.5    490.9    601.6    902.4     12.7
  Wideband (T3 ports)         31.5     38.2     48.5     97.0    188.7    209.2    311.7     46.5
  Broadband (T3 ports)        44.5     51.8     79.1     52.2     61.6     74.6    110.4     16.4
Revenue ($ millions)
  Narrowband                 367.7    363.0    382.0    346.1    402.4    490.3    606.6      8.7
  Wideband                   350.0    401.1    487.0    916.7  1,722.0  1,903.7  2,504.7     38.8
  Broadband                  376.2    394.0    550.1    339.2    388.0    448.0    601.0      8.1
  Total                    1,093.9  1,158.1  1,419.1  1,602.0  2,512.4  2,842.0  3,712.3     22.6
Average price ($000s/unit)
  Narrowband                  0.84     0.84     0.84     0.83     0.82     0.81     0.67     –3.6
  Wideband                   11.11    10.50    10.04     9.45     9.13     9.10     8.04     –5.3
  Broadband                   8.45     7.61     6.95     6.50     6.30     6.01     5.44     –7.1
Change in prices (%)
  Narrowband                           1.1      0.0     –2.3     –0.6     –0.6    –17.5     –3.6
  Wideband                            –5.5     –4.4     –5.9     –3.4     –0.3    –11.7     –5.3
  Broadband                          –10.0     –8.6     –6.6     –3.1     –4.7     –9.4     –7.1
  Total                               –4.9     –4.7     –5.2     –2.9     –1.0    –12.3     –5.2
DCC price index               1.00     0.95     0.91     0.86     0.83     0.83     0.72     –5.2

Source: North American Transmission Equipment Market Share and Forecast, 1999, Gartner, October 11, 1999.
Notes: Narrowband—DCC equipment designed to electronically cross-connect and manage DS0s (64 Kbps); wideband—DCC equipment designed to electronically cross-connect and manage DS1s or T1s; broadband—DCC equipment designed to electronically cross-connect and manage DS3s/T3s or SONET OC1 circuits.
Industry conditions in the early to mid-1990s were reportedly much more like the conditions in 1997 and 1998 than the conditions in 2000 and 2001.

The second reason why prices for fiber optic equipment didn't fall faster over the 1994–2001 period is that DWDM equipment did not make a significant presence (in terms of nominal expenditures) until 1997. As the nominal share of expenditures moved toward DWDM equipment, the decline in the overall fiber optic price index accelerated. In 2001, DWDM accounted for 37 percent of all fiber optic equipment.

The bottom of table 9.9 combines the nominal expenditures and the price data to compute indexes of real spending. All of the real expenditures are expressed in 1998 millions of dollars. Between 1994 and 2001, real expenditures on fiber optic equipment increased at an average annual rate of 36.2 percent.20 In 1999 and 2000, real expenditures on fiber optic equipment grew at furious paces, in part due to technological advances and the large build-out of many long-haul networks. In 2001, nominal expenditures plunged over 46 percent, but prices also fell quickly, so that in real terms, expenditures fell a little over 28 percent.

20. Real computer investment increased at an annual rate of 36.5 percent during this period.
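The real-spending figures are simply nominal spending deflated by the price index. A one-line check of the 2001 figures, using the rounded growth rates from table 9.9, is shown below (my own illustration).

```python
# Illustrative check: real growth = (1 + nominal growth) / (1 + price change) - 1.
nominal_change = -0.463   # 2001 change in nominal fiber optic spending (table 9.9)
price_change = -0.252     # 2001 change in the fiber optic price index (table 9.9)

real_change = (1 + nominal_change) / (1 + price_change) - 1
print(f"Implied 2001 change in real fiber optic spending: {real_change:.1%}")  # about -28%
```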
Table 9.9    Summary indexes for fiber optic equipment

                                  1994     1995     1996     1997     1998     1999     2000     2001   AAGR (%)
Nominal spending ($ millions)
  Total fiber optic equipment    3,443    4,163    5,112    7,160    9,059   14,081   22,061   11,842     19.3
    Percent change                         20.9     22.8     40.0     26.5     55.4     56.7    –46.3
  Multiplexers                   1,865    2,406    2,656    3,562    4,556    7,291   10,573    4,880     14.7
  DWDM                               0        0      381    1,595    1,854    3,427    6,740    4,348     41.6
  DCC                            1,111    1,182    1,455    1,581    2,210    2,749    3,950    1,914      8.1
Price indexes (1998 = 1.0)
  Total fiber optic equipment     1.36     1.28     1.21     1.10     1.00     0.87     0.72     0.54    –12.4
    Percent change                         –5.7     –5.3     –9.0     –9.3    –12.6    –17.9    –25.2
  Multiplexers                    1.29     1.21     1.13     1.07     1.00     0.88     0.71     0.50    –12.7
  DWDM                                              1.73     1.32     1.00     0.75     0.60     0.44    –17.8
  DCC                             1.20     1.14     1.09     1.03     1.00     0.99     0.87     0.80     –5.6
Real spending (1998 $ millions)
  Total fiber optic equipment    2,536    3,251    4,217    6,491    9,059   16,112   30,762   22,089     36.2
    Percent change                         28.2     29.7     53.9     39.6     77.9     90.9    –28.2
  Multiplexers                   1,449    1,991    2,341    3,343    4,556    8,277   14,847    9,835     31.5
  DWDM                                               220    1,212    1,854    4,571   11,172    9,914     72.3
  DCC                              928    1,037    1,340    1,535    2,210    2,777    4,550    2,396     14.5

Notes: It is assumed that prices for multiplexers fell an average of 6.1 percent between 1994 and 1996, the same rate that was estimated for 1996 to 1998. The price index for dense wavelength division multiplexing (DWDM) equipment is assumed to fall 24 percent, the average rate from 1998 to 2001. The price index for digital cross-connects (DCC) in 2001 is assumed to fall 8.0 percent, a slightly faster rate than in previous years because of increased industry competition.
9.6 Price Indexes for Overall Communications Equipment

The previous sections presented results on the extent to which prices for various components of communications equipment changed during the 1990s. In this section, those results are used to derive three price indexes for overall communications equipment. The indexes vary by the assumptions that underlie them. The first index is based on very conservative assumptions about price changes and the third index is based on aggressive assumptions about price changes. The "truth" likely lies between the two extreme cases. The three indexes are located on the bottom of tables 9.12, 9.13, and 9.14.
Table 9.10    Communications equipment: Spending and production ($ millions)

                                           1994     1995     1996     1997     1998     1999     2000   1997–2000 AAGR (%)
Categories from table 9.3
 1. Total                                                           77,444   86,344   99,000  113,855        13.7
 2. Consumer                                                         8,167    8,331    8,420    8,456         1.2
 3. Business and government spending                                69,277   78,013   90,581  105,399        15.0
GDP accounting
 4. Communications equipment GDP
    (= 5 + 6 + 7 + 8 – 9)                 69,598   77,401   83,740   97,461  103,176  110,710  124,249        8.4
 5. Equipment and software                54,743   60,019   65,609   73,700   81,200   93,300  116,800       16.6
 6. Government                            12,914   13,533   14,157   15,017   15,771   16,919   18,983        8.1
 7. Consumer spending                      4,523    4,958    5,420    8,167    8,331    8,420    8,456         1.2
 8. Exports                               11,237   13,762   14,469   17,683   18,209   20,008   23,284        9.6
 9. Imports                               13,819   14,871   15,915   17,105   20,335   27,937   43,274       36.3
Memo:
10. Domestic spending (= 5 + 6 + 7)       72,180   78,510   85,186   96,883  105,302  118,639  144,239       14.2
11. (10 – 1)                                                         27,606   27,289   28,058   38,839       12.1
ASM production information
12. Industry shipments                    48,052   58,499   68,232   82,852   86,119   97,953  119,329       12.9
13. ASM product shipments                 48,440   55,431   63,968   78,142   81,932   92,715  107,921       11.4

Note: ASM = Annual Survey of Manufacturers.
9.6.1 Aggregate Spending and Production

Accurate spending figures are needed to derive aggregate deflators for communications equipment. Table 9.10 summarizes spending and production of communications equipment. The top panel of numbers in table 9.10 is from table 9.3, spending on communications equipment for certain categories that I obtained largely independently of government sources. The second panel of numbers in table 9.10 shows the official numbers on communications equipment spending.21 Using the E&S, government, and consumer spending figures, total domestic spending on communications equipment grew at an average rate of 14.2 percent between 1997 and 2000, only half a percentage point faster than the figures from table 9.3. The bottom portion of table 9.10 shows estimates of production of communications equipment, a concept similar to the GDP figures in line 4.

21. The government and consumer spending figures are not published on a regular basis and, in fact, were last published for 1992. The government expenditure figures were extrapolated to 2000 by assuming the growth rate was half that of the E&S series. The consumer spending numbers are based on those I derived in table 9.3 and on the 1992 input-output tables. Private-sector spending on communications gear likely grew much faster than government spending during the 1990s. As a result of the Telecommunications Act of 1996, a flood of entrants entered into the telecommunication service industry. Also, as stated previously, there was a surge in spending on fiber optic equipment that was tied to the build-out of privately owned long-haul fiber networks.
Surprisingly, industry shipments and product shipments increased at faster rates than the GDP figures. Why this is the case is unclear. Overall, the point to take away from table 9.10 is that several independent sources concur that domestic spending on communications equipment increased at a fairly rapid clip in the late 1990s and that production of communications equipment also increased quickly. Further, the spending estimates from table 9.3 account for a majority of all communications equipment spending.

9.6.2 Constructing Overall Communications Equipment Price Indexes

I employ a bottom-up approach to construct overall price indexes for communications equipment—the estimates of price indexes for various components of communications equipment are chain weighted. The assumptions made about prices for the conservative, moderate, and aggressive scenarios are displayed in table 9.11. Tables 9.12 through 9.14 show the results based on these assumptions.

Conservative Assumptions

Lines 1 through 6 in the top portion of table 9.12 show the price indexes used in place of the PPI for a select group of products. For the conservative assumptions, only those price indexes are used that were discussed earlier in this paper. That set of results includes previous results on cell phones, central office switching equipment, and LAN equipment. Added to these three sets of results are those that are developed in this paper—fiber optic equipment, PBXs, and modems. For all other communications equipment, the conservative approach uses the PPI for communications equipment.22 These price indexes in lines 1 through 6 are chain weighted using the weights in lines 7 through 12. The aggregate price index for these products, line 15, falls an average of 14.0 percent between 1994 and 2000, with the fastest price declines occurring in the later three years. In contrast, the overall PPI for communications equipment (line 19) falls at a 0.4 percent annual rate. The price index for the special products does not include the prices for products not listed in lines 1 through 6. The conservative approach assumes that prices for all other communications equipment follow the PPI. Line 22 shows the overall price index for communications equipment under the conservative assumptions—prices fall an average of 6.4 percent, 6 percentage points faster than the PPI and 3.5 percentage points faster than BEA's communications equipment price index (line 24). Recall that the BEA index already makes an adjustment for two of the six product categories in lines 1 through 6—LAN equipment and central office switching and transmission equipment.

22. Ideally, each product would be linked to its own PPI. However, the concordance between my system and the SIC was difficult to construct. Also, most PPIs within SIC 366 do not show much change over the late 1990s, so even if I were to use a more refined concordance, the overall results would not change very much.
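The chapter does not spell out the exact chain-weighting formula, so the sketch below should be read as one common implementation (a chained Tornqvist aggregation using average nominal-spending shares), with component names and numbers that are only loosely illustrative rather than the chapter's actual inputs.

```python
# Illustrative chained Tornqvist aggregation of component price indexes
# using nominal spending weights. Values are illustrative, not the paper's data.
from math import exp, log

prices = {
    "LAN":   {1998: 0.46, 1999: 0.38, 2000: 0.33},
    "fiber": {1998: 0.74, 1999: 0.64, 2000: 0.53},
}
spend = {
    "LAN":   {1998: 13_111, 1999: 14_027, 2000: 15_838},
    "fiber": {1998: 9_059, 1999: 14_081, 2000: 22_061},
}

years = sorted({y for series in prices.values() for y in series})
aggregate = {years[0]: 1.0}
for t0, t1 in zip(years, years[1:]):
    total0 = sum(spend[c][t0] for c in spend)
    total1 = sum(spend[c][t1] for c in spend)
    link = 0.0
    for c in prices:
        # weight each component by its average nominal share in the two periods
        weight = 0.5 * (spend[c][t0] / total0 + spend[c][t1] / total1)
        link += weight * log(prices[c][t1] / prices[c][t0])
    aggregate[t1] = aggregate[t0] * exp(link)

print({y: round(v, 3) for y, v in aggregate.items()})
```

Because the weights are updated every period, fast-falling components (such as fiber optic gear) pull the aggregate down more strongly as their nominal share grows, which is the mechanism the text describes.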
Table 9.11    Three sets of assumptions for prices, by equipment type

Central office switching and transmission equipment
  Conservative: BEA (16.1% decline per year). Moderate: BEA. Aggressive: BEA.

Fiber optic equipment
  Conservative: Section 9.5 results. Moderate: Section 9.5 results. Aggressive: Section 9.5 results.

Cellular phone infrastructure
  Conservative: PPI. Moderate: –7.5% per year, half that of cellular phones. Aggressive: –10% per year, 2/3 that of cellular phones.
  Comments: There has been continuous improvement in cell phone technology over the past decade, and there has been increased competition in the base station equipment market.

Local area network (LAN) equipment
  Conservative: Doms and Forman. Moderate: Doms and Forman. Aggressive: Doms and Forman.

Modems
  Conservative: Section 9.4 results. Moderate: Section 9.4 results. Aggressive: Section 9.4 results.

Other data communications (WAN, ISDN, ATM, frame relay, and the like)
  Conservative: PPI. Moderate: 3/4 of Doms and Forman. Aggressive: 3/4 of Doms and Forman.
  Comments: The prices for equipment in this category probably do not fall as fast as LAN equipment because the market is more dispersed and competition is not as great.

Private branch exchange (PBX)/Key telephone system (KTS)
  Conservative: Section 9.4 results. Moderate: Section 9.4 results: –1% per year. Aggressive: Section 9.4 results: –2% per year.
  Comments: It is likely the results in section 9.4 understate the actual price declines, so the moderate and aggressive assumptions make adjustments.

Voice processing equipment (call centers, voice mail, and the like)
  Conservative: PPI. Moderate: Section 9.4 results for PBX: –1% per year. Aggressive: Section 9.4 results: –2% per year.
  Comments: There have been large improvements in technology in this category during the 1990s, especially in the technology used in call centers.

Wireless handsets (cell phones)
  Conservative: Hausman. Moderate: Hausman. Aggressive: Hausman.

Capital equipment spending by cable television firms not covered above
  Conservative: PPI. Moderate: Decline of 5% per year. Aggressive: Decline of 5% per year.
  Comments: There have been improvements in cable TV technology, including the widespread upgrades to digital cable.

Communications equipment not covered by above categories
  Conservative: PPI. Moderate: PPI: –2% per year. Aggressive: PPI: –4% per year.
  Comments: The moderate and aggressive cases assume that the PPI for other categories is mismeasured. However, the largest components in this group are not the ones where technology change has been as rampant as other categories. Therefore, the aggressive case has the PPI falling less than half as fast as the aggregate price index for the above listed items.

Note: BEA = Bureau of Economic Analysis; PPI = producer price index; ISDN = integrated services digital network; ATM = asynchronous transfer mode.
Table 9.12    Price indexes for communications equipment—Conservative assumptions (all price indexes = 1.00 in 1994, all spending figures in $ millions)

                                                    1994     1995     1996     1997     1998     1999     2000   AAGR (%)
Price indexes used instead of the PPI
 1. Central office switching and transmission
    equipment                                       1.00     0.84     0.70     0.59     0.50     0.42     0.35    –16.1
 2. Fiber optic equipment                           1.00     0.94     0.89     0.81     0.74     0.64     0.53    –10.1
 3. Local area network (LAN) equipment              1.00     0.84     0.77     0.64     0.46     0.38     0.33    –17.0
 4. Modems                                          1.00     0.83     0.72     0.51     0.41     0.29     0.20    –23.3
 5. Private branch exchange (PBX)/Key telephone
    system (KTS)                                    1.00     1.03     0.96     0.90     0.86     0.79     0.75     –4.8
 6. Wireless handsets (cell phones)                 1.00     0.85     0.72     0.61     0.52     0.44     0.38    –15.0
Spending on the above products
 7. Central office switching and transmission
    equipment                                      9,677    9,810   11,537   12,358   13,562   14,475   15,500      8.2
 8. Fiber optic equipment                          3,443    4,163    5,112    7,160    9,059   14,081   22,061     36.3
 9. LAN equipment                                  5,828    7,808   10,502   11,437   13,111   14,027   15,838     18.1
10. Modems                                         2,500    3,000    3,500    3,077    3,290    2,800    2,500      0.0
11. PBX/KTS                                        5,890    6,105    6,245    6,832    7,601    7,980    8,392      6.1
12. Wireless handsets (cell phones)                2,963    3,704    4,630    5,787    7,228    7,619    7,692     17.2
13. Spending on special products by consumers      1,481    1,852    2,315    2,894    3,614    3,810    3,846     17.2
14. Spending on special products by business
    and government                                28,820   24,930   28,709   32,320   37,126   43,146   52,299     10.4
Aggregate price indexes for above products
15. Total                                           1.00     0.89     0.79     0.67     0.56     0.48     0.40    –14.0
16. Consumer products                               1.00     0.85     0.72     0.61     0.52     0.44     0.38    –15.0
17. Business and government investment              1.00     0.89     0.79     0.68     0.57     0.48     0.41    –13.9
18. Percent change                                          –11.3    –10.9    –14.6    –16.3    –15.3    –15.2
Construction of business and government price index
19. PPI for communications equipment                1.00     1.01     1.01     1.02     1.01     1.00     0.97
20. Percent change                                            0.5      0.9      0.6     –0.6     –1.7     –2.3     –0.4
21. Business and government spending net of
    special products                              38,837   48,621   51,057   56,397   59,845   67,074   83,483     13.6
22. Conservative communications equipment
    price index                                     1.00     0.96     0.92     0.87     0.80     0.74     0.67
23. Percent change                                           –4.5     –3.6     –5.8     –7.7     –8.1     –8.4     –6.4
24. Bureau of Economic Analysis communications
    equipment price index                           1.00     0.96     0.94     0.93     0.89     0.86     0.83
25. Percent change                                           –4.0     –2.5     –1.0     –4.1     –3.6     –2.8     –3.0

Note: PPI = producer price index.
Moderate Assumptions

The next set of assumptions about prices builds upon the conservative set of assumptions. The first additional assumption made is that prices for cellular phone infrastructure fall half as fast as cell phones. There has been continuous improvement in cell phone technology over the past decade, especially as new generations of equipment have been deployed. There has been a progression from AMPS to iDEN to GSM and CDMA.23 Each successive technology is better than the previous one, and this trend is likely to continue. Another assumption made is that prices for "other data communication" fall about three-quarters as fast as LAN equipment. This category includes WAN equipment, frame relay, ATM, and other components needed to run large data networks. Technology in these areas has improved over the years.

Another modification made within the set of moderate assumptions is that PBX prices fall 1 percent more per year than the results in section 9.4. As stated previously, the PBX results are likely to be biased upward. Additionally, I assume that prices for voice-processing equipment follow a similar pattern. There have been large improvements in voice-processing technology, especially in the technology used by call centers. Finally, for communications equipment not covered by the above-mentioned categories, I assume that prices fall at an average annual rate that is 2 percentage points below that of the official PPI for communications equipment. In each and every instance that prices for communications equipment have been examined, it has been found that prices fall faster than the PPI. Therefore, assuming that prices fell faster than the officially published numbers by a modest amount seems reasonable.

As shown in table 9.13, under these assumptions, prices for communications equipment fell at an 8.3 percent annual rate, more than one-third as fast as the BEA computer price index and over 5 percentage points faster than the BEA communications equipment price index.

Aggressive Assumptions

Finally, table 9.14 presents results under the aggressive assumptions. The aggressive assumptions are laid out in table 9.11. The biggest difference between the aggressive and moderate assumptions is that the PPI is assumed to fall 4 percentage points faster than the official PPI. Under the aggressive assumptions, prices for communications equipment fall at a 10.6 percent annual rate, about 2 percentage points faster than the moderate assumptions.

23. The Advanced Mobile Phone Service (AMPS) was an analog system developed by Bell Labs in the 1970s. The remaining technologies are digital. iDEN, developed by Motorola, uses time-division multiple access (TDMA) technology. The two digital technologies that are the fastest growing in the United States are code division multiple access (CDMA) and the global system for mobile (GSM) communications.
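The "modified PPI" rows in tables 9.13 and 9.14 simply shift the official PPI's annual growth rate down by a fixed number of percentage points. A minimal sketch of that adjustment is below (my own illustration; the PPI levels are the rounded values from table 9.12).

```python
# Illustrative only: lower an index's annual growth by a fixed number of
# percentage points, as in the "modified PPI" used for the moderate (2 points)
# and aggressive (4 points) assumptions.
def modified_index(levels, extra_decline_pp):
    out = [levels[0]]
    for prev, curr in zip(levels, levels[1:]):
        growth = curr / prev - 1 - extra_decline_pp / 100.0
        out.append(out[-1] * (1 + growth))
    return out

ppi = [1.00, 1.01, 1.01, 1.02, 1.01, 1.00, 0.97]        # 1994-2000 levels (table 9.12)
print([round(x, 2) for x in modified_index(ppi, 2)])    # roughly the table 9.13 row
print([round(x, 2) for x in modified_index(ppi, 4)])    # roughly the table 9.14 row
```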
Table 9.13    Price indexes for communications equipment—Moderate assumptions (all price indexes = 1.00 in 1994, all spending figures in $ millions)

                                                    1994     1995     1996     1997     1998     1999     2000   AAGR (%)
Price indexes used instead of the PPI
Items from conservative assumptions page, with adjustment for:
 1. PBX prices falling an additional 1% per year    1.00     0.88     0.79     0.67     0.56     0.47     0.40    –14.2
 2. Other data communication                        1.00     0.88     0.83     0.72     0.57     0.49     0.44    –12.7
 3. Voice-processing equipment (call centers,
    voice mail, etc.)                               1.00     1.03     0.96     0.90     0.86     0.79     0.75     –4.8
 4. Cellular phone infrastructure                   1.00     0.93     0.86     0.79     0.73     0.68     0.63     –7.5
 5. Cable TV equipment                              1.00     0.95     0.90     0.86     0.81     0.77     0.74     –5.0
Spending on special products
 6. Items from conservative assumptions page       28,820   24,930   28,709   32,320   37,126   43,146   52,299    10.4
 7. Other data communication                        4,902    6,567    8,834    9,620   11,344   13,479   15,532    21.2
 8. Voice-processing equipment (call centers,
    voice mail, etc.)                               4,279    4,435    4,537    4,963    5,626    6,215    6,743     7.9
 9. Cellular phone infrastructure                   4,266    4,693    5,162    5,678    4,474    4,955    4,734     1.8
10. Cable TV equipment                              2,554    2,810    3,091    3,400    3,850    5,400    6,000    15.3
11. Spending on special products by business
    and government                                 44,821   43,435   50,332   55,981   62,420   73,195   85,308    11.3
Aggregate price indexes for special products
12. Business and government investment              1.00     0.91     0.83     0.73     0.62     0.54     0.48
13. Percent change                                           –9.4     –8.8    –12.0    –14.5    –12.9    –12.1    –11.6
Construction of business and government price index (assume PPI falls 2% per year faster than actual)
14. Modified PPI for communications equipment       1.00     0.99     0.97     0.96     0.94     0.90     0.86     –2.4
15. Moderate communications equipment price
    index                                           1.00     0.94     0.88     0.81     0.73     0.66     0.60
16. Percent change                                           –6.4     –5.8     –8.1    –10.2     –9.7     –9.3     –8.3
17. BEA communications equipment price index        1.00     0.96     0.94     0.93     0.89     0.86     0.83
18. Percent change                                           –4.0     –2.5     –1.0     –4.1     –3.6     –2.8     –3.0
19. BEA computer price index                        1.00     0.84     0.64     0.49     0.36     0.28     0.24
20. Percent change                                          –16.1    –24.0    –22.5    –26.4    –23.8    –13.6    –21.2

Note: PPI = producer price index; PBX = private branch exchange; BEA = Bureau of Economic Analysis.
Table 9.14    Price indexes for communications equipment—Aggressive assumptions (all price indexes = 1.00 in 1994, all spending figures in $ millions)

                                                    1994     1995     1996     1997     1998     1999     2000   AAGR (%)
Price indexes used instead of the PPI
 1. Central office switching and transmission
    equipment                                       1.00     0.84     0.70     0.59     0.50     0.42     0.35    –16.1
 2. Fiber optic equipment                           1.00     0.94     0.89     0.81     0.74     0.64     0.53    –10.1
 3. Cellular phone infrastructure                   1.00     0.90     0.81     0.73     0.66     0.59     0.53    –10.0
 4. Local area network (LAN) equipment              1.00     0.84     0.77     0.64     0.46     0.38     0.33    –17.0
 5. Modems                                          1.00     0.83     0.72     0.51     0.41     0.29     0.20    –23.3
 6. Other data communication                        1.00     0.88     0.83     0.72     0.57     0.49     0.44    –12.7
 7. Private branch exchange (PBX)/Key telephone
    system (KTS)                                    1.00     1.01     0.92     0.85     0.79     0.71     0.66     –6.8
 8. Voice-processing equipment (call centers,
    voice mail, etc.)                               1.00     1.01     0.92     0.85     0.79     0.71     0.66     –6.8
 9. Cellular phones                                 1.00     0.85     0.72     0.61     0.52     0.44     0.38    –15.0
10. Cable TV equipment                              1.00     0.95     0.90     0.86     0.81     0.77     0.74     –5.0
Spending on special products
11. Spending on special products by business
    and government                                 44,821   51,242   60,834   67,418   75,531   87,222  101,146    14.5
Aggregate price indexes for special products
12. Business and government investment              1.00     0.90     0.81     0.71     0.60     0.51     0.44
13. Percent change                                          –10.0     –9.8    –13.2    –15.5    –14.0    –13.3    –12.6
Construction of business and government price index (assume PPI falls 4 percent per year faster than actual)
14. Modified PPI for communications equipment       1.00     0.97     0.94     0.90     0.86     0.81     0.76     –4.4
15. Aggressive communications equipment price
    index                                           1.00     0.92     0.85     0.76     0.66     0.58     0.51
16. Percent change                                           –7.9     –8.0    –10.8    –13.0    –12.2    –11.6    –10.6
17. BEA communications equipment price index        1.00     0.96     0.94     0.93     0.89     0.86     0.83
18. Percent change                                           –4.0     –2.5     –1.0     –4.1     –3.6     –2.8     –3.0
19. BEA computer price index                        1.00     0.84     0.64     0.49     0.36     0.28     0.24
20. Percent change                                          –16.1    –24.0    –22.5    –26.4    –23.8    –13.6    –21.2

Note: See table 9.13 notes.
9.7 Investment and MFP

In this section we address the implications of the mismeasurement of communications equipment prices. Specifically, we examine how growth rates in various investment aggregates would change if the alternative price indexes for communications equipment were used. The calculations are performed for investment in communications equipment, information-processing equipment, and total nonresidential investment. One reason we might be interested in how the growth rates of real investment change as a result of new deflators is that it may alter the results on capital deepening and MFP. Later in this section we address this issue. Basically, the real growth rates of various investment categories will increase as a result of using the price indexes in the previous section.

What influences the effect of new communications equipment deflators on various aggregates is the nominal share of communications equipment in the aggregates. Roughly speaking, the effect of altering the price index for communications equipment is proportional to the nominal share of communications equipment. Between 1994 and 2000, communications equipment made up approximately 25 percent of all investment in information processing equipment and software and 10 percent of all nonresidential investment.

How the growth rates for real communications equipment, information-processing equipment, and total nonresidential investment are affected by the conservative, moderate, and aggressive price indexes is displayed in table 9.15. The figures in table 9.15 reflect how much the average growth rate for those series would increase under the various assumptions about communications equipment pricing.
Table 9.15    Changes in average annual growth rates for real investment using various price indexes for communications equipment (increase in average annual growth from 1994 to 2000)

                    Communications equipment    Information processing equipment    Nonresidential investment
Conservative                  4.1                              0.9                              0.4
Moderate                      6.6                              1.5                              0.6
Aggressive                    9.8                              2.2                              0.9
Fig. 9.6 Changes in the real growth rate in nonresidential investment under differing assumptions for communications equipment prices
As shown in the bottom left of the table, if the aggressive deflators are used, then the average annual growth rate for real investment in communications equipment increases by 9.8 percentage points between 1994 and 2000. The effects of the aggressive deflator on information-processing equipment and on nonresidential investment are smaller, but still considerable. Even if the conservative deflator is used, real nonresidential investment would be boosted by 0.4 percentage point per year. The results in table 9.15 are averages over the 1994–2000 period. Figure 9.6 shows the data on an annual basis for nonresidential investment. Each of the lines corresponds to one of the three assumptions about communications equipment prices. As shown, the changes to the deflators have the largest effects for 1997–2000, the period in which nominal spending on fiber optic equipment grew rapidly. In a nutshell, adopting deflators for communications equipment that more likely reflect actual price movements would have nontrivial effects on a variety of investment categories.

One reason the growth rate in real investment is of interest is the relationship between real investment, the capital stock, and productivity. This is especially true of the acceleration of productivity in the second half of the 1990s, when current estimates suggest that there was a significant contribution from capital deepening and a significant contribution from an acceleration in MFP. One way in which the changes in the growth rate of communications equipment feed into changes in productivity is the output effect; the growth rate of overall output is increased because the growth rate of one of the parts has increased.
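A minimal sketch of the growth-accounting arithmetic used in this and the following paragraph is shown below; the shares are the approximate figures cited in the text, and the sketch is my own illustration rather than the paper's calculation.

```python
# Illustrative growth-accounting arithmetic.
# Output effect: extra GDP growth = nominal GDP share x extra real growth of the component.
gdp_share = 0.009            # communications equipment share of GDP, roughly 0.9 percent
extra_growth_aggressive = 9.8    # aggressive case: added percentage points of real growth
output_effect = gdp_share * extra_growth_aggressive
print(f"Output effect: about {output_effect:.2f} percentage point of GDP growth")  # under 0.1

# Capital deepening: contribution = capital income share x extra growth in capital services.
income_share = 0.02          # communications equipment capital income share (cited from Sichel)
extra_growth_moderate = 6.5  # moderate case: added percentage points of real growth
deepening = income_share * extra_growth_moderate
print(f"Capital deepening: about {deepening:.2f} percentage point per year")  # a bit over 0.1
```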
The output effect is small given the small share of communications equipment in total output; the average share of communications equipment in total GDP between 1994 and 2000 is about 0.9 percent, and about 1.1 percent of total nonfarm business output. Even if the aggressive results are used, GDP growth would be boosted by less than 0.1 percent per year. The other way in which the deflators affect productivity is through the contribution to the growth rate of capital services; a higher growth rate in capital services will shift some of the increase from MFP growth toward capital deepening. One way to estimate the effect on capital deepening is by examining the product of the capital income share of communications equipment and the change in the real growth rate of communications equipment. Daniel Sichel has estimated that the communications equipment share during the latter half of the 1990s is a bit less than 2 percent. The moderate deflators for communications equipment would boost real communications equipment growth by an average of 6.5 percent, resulting in capital deepening increasing by a bit more than 0.1 percentage point per year. To put this result in some context, between 1997 and 2000, Daniel Sichel estimates that MFP grew at a 1.1 percent average rate; the moderate results in this paper on communications equipment would likely lower that estimate to closer to 1.0 percent. If the more aggressive assumptions about communications equipment prices were used, then MFP growth would be lowered to closer to 0.9 percent per year.

9.8 Conclusion

The view that statistical agencies have difficulty in measuring prices for classes of goods that exhibit rapid technological change has long been espoused. Measuring prices for such goods poses special problems, and extra effort has to be employed. For instance, there is now a large and rich literature on measuring prices for computers, and the results of this research have been folded into the national accounts. Surprisingly, relatively little work on more accurately measuring prices has been done in the area of communications equipment, an area where nominal expenditures have been very close to those of computers and an area where certain segments have enjoyed rapid technological change. There are several reasons for this, including the large diversity of products within communications equipment and the difficulty in obtaining prices.

This paper has attempted to derive measures of prices for communications equipment by gathering information on prices for many of the individual pieces of communications equipment. Some of the price series were poached from previous research, some others were developed more fully in this paper, and some assumptions have been made about the remainder. We estimate that prices for communications equipment likely fell anywhere from 5.5 to 10.5 percent on average between 1994 and 2000, significantly faster than the PPI and the BEA deflator.
On the one hand, these results are somewhat akin to the price profiles for computers in that they show steady declines and that the declines accelerated in the late 1990s. On the other hand, the price declines for communications equipment appear to be less than half of those of computers. We did find that prices for some components of communications equipment fall very fast, especially for fiber optic equipment. However, fiber optic equipment makes up a small share of total communications equipment, and so the influence it has on the larger communications aggregate is muted.

Our results are also roughly consistent with several independent sources of information on technological change and price declines. First, Aizcorbe, Flamm, and Khurshid find that the prices for semiconductors that go into communications equipment do not fall as quickly as prices of chips that go into computers. Further, semiconductor content in computers is higher. Second, figure 9.2 showed that the number of patents awarded for communications equipment was much less than the number awarded for computers in the late 1990s. One can speculate about other reasons that prices for communications equipment do not fall as quickly as computers, such as the fact that the communications equipment industry is much more fragmented than that for computer hardware. Achieving economies of scale or proceeding up learning curves, as are important for computers and semiconductors, do not appear to happen often within communications equipment, with the exception of cell phones and LAN cards.

The implication of the results in this paper is that real investment in certain categories was much higher than actually reported. For instance, real nonresidential investment would be from 0.5 to 1.0 percentage point higher between 1997 and 2000 if the deflators presented in this paper were used. This higher rate of real investment also feeds into higher rates of capital accumulation, resulting in shifting a small portion of the growth in MFP in the late 1990s to capital deepening.
References

Aizcorbe, A., C. Corrado, and M. Doms. 2003. When do matched-model and hedonic techniques yield similar price measures. Federal Reserve Bank of San Francisco Working Paper.
Aizcorbe, A., K. Flamm, and A. Khurshid. 2002. The role of semiconductor inputs in IT hardware price decline: Computers vs. communications. Finance and Economics Discussion Series Working Paper 2002-37. Washington, DC: Federal Reserve Board.
Doms, Mark, and Christopher Forman. 2003. Price for LAN equipment. Federal Reserve Bank of San Francisco Working Paper no. 2003-15.
Flamm, Kenneth. 1989. Technological advance and costs: Computers versus communications. In Changing the rules: Technological change, international competition, and regulation in communications, ed. Robert Crandall and Kenneth Flamm, 13–61. Washington, DC: Brookings Institution.
Forman, C., and M. Doms. 1999. Price declines and consumer welfare benefits in computer networking equipment. Working Paper. Kellogg School of Management, Northwestern University.
Grimm, Bruce. 1996. A quality adjusted price index for digital telephone switches. Working Paper. Washington, DC: Bureau of Economic Analysis.
Hausman, Jerry. 1999. Cellular telephone, new products, and the CPI. Journal of Business and Economic Statistics 17 (2): 188–94.
Jorgenson, Dale, and Kevin Stiroh. 2000. Raising the speed limit: U.S. economic growth in the information age. Brookings Papers on Economic Activity, Issue no. 1:125–211.
Shapiro, M., and D. Wilcox. 1996. Mismeasurement in the Consumer Price Index: An evaluation. In NBER macroeconomics annual, ed. B. S. Bernanke and J. J. Rotemberg, 93–142. Cambridge: MIT Press.
Trajtenberg, M. 1989. Economic analysis of product innovation: The case of CT scanners. Cambridge, MA: Harvard University Press.
Triplett, J. 1989. Price and technical change in a capital good: A survey of research on computers. In Technology and capital formation, ed. D. Jorgenson and R. Landau, 127–213. Cambridge: MIT Press.
10    Information-Processing Equipment and Software in the National Accounts

Bruce T. Grimm, Brent R. Moulton, and David B. Wasshausen
10.1 Introduction

In the U.S. national income and product accounts (NIPAs), most of the types of goods in the investment category "information-processing (IP) equipment and software" have experienced rapidly changing technology and are thus candidates for inclusion in the new economy. The NIPA price indexes for computers and peripheral equipment, computer software, and communication equipment all, at least in part, include quality adjustments based on hedonic studies. In addition, anecdotal evidence strongly indicates that instruments have also undergone substantial quality improvements, although no hedonic quality adjustments are currently being made to their prices. Together, these goods make up more than nine-tenths of the category. There is also some evidence that there have been substantial quality improvements for the remaining two types of goods in the category, photocopy and related equipment and office and accounting equipment. Table 10.1 shows the shares of the components in the category for 1996, the reference (base) year for the NIPAs.

Several recent studies have found that goods in this investment category have had significant roles in an acceleration in both real gross domestic product (GDP) and labor productivity in the second half of the 1990s.

Bruce T. Grimm is an economist with the Bureau of Economic Analysis. Brent R. Moulton is the associate director for National Economic Accounts for the Bureau of Economic Analysis and a member of the executive committee of the Conference on Research in Income and Wealth. David B. Wasshausen is an economist with the Bureau of Economic Analysis. The authors wish to thank the following people for providing valuable feedback and suggestions: Barry Bosworth, Jeff Crawford, Charlie Funk, Paul Lally, Steve Landefeld, Carol Moylan, Eugene Seskin, and the editors. Views expressed are those of the authors and do not necessarily reflect those of the Bureau of Economic Analysis.
Table 10.1    Components of IP equipment and software, 1996

                                        Billions of dollars    Percent of total
IP equipment and software                      287.3                100.0
  Computers and peripheral equipment            70.9                 24.7
  Software                                      95.1                 33.1
  Communications equipment                      65.6                 22.8
  Instruments                                   33.3                 11.6
  Photocopy and related equipment               14.7                  5.1
  Office and accounting equipment                7.8                  2.7

Source: NIPA table 5.9.
For example, Nordhaus (2001) found that more than one-third of the acceleration in labor productivity in 1996–98 versus 1978–95 was due to new economy production, defined as output of machinery, electrical equipment, telephone and telegraph equipment, and software. Similarly, Gordon (1999, 2000) found a sharp acceleration in labor productivity in durable goods manufacturing, and even more sharply in computers manufacturing, and much weaker accelerations in other parts of the business sector for the period 1995Q4 to 1999Q4 versus 1972Q2 to 1995Q4. Jorgenson and Stiroh (2000) found that an acceleration in productivity growth was driven by information technology in the late 1990s compared to the early 1990s. Oliner and Sichel (2000) found that the sum of the contributions of the services of information technology capital and multifactor productivity in computer production and computer-related semiconductors production accounted for about two-thirds of the acceleration in labor productivity for the period 1996–99 compared to 1991–95.

Some researchers have urged that additional work be done, particularly on price estimates for equipment based on semiconductors and other rapidly advancing technologies. For example, Jorgenson (2001) argued that the Bureau of Economic Analysis's (BEA) price indexes for own-account and custom software present a distorted picture because they are partly based on programmer wages and do not allow for improvements in the productivity of computer programmers. Further, he has argued that some communications equipment, particularly transmission gear, has rates of progress that outstrip semiconductors, and that more work is needed to adequately adjust for these improvements in quality.

The BEA's strategic plan identifies several initiatives that are designed to improve the estimation of IP equipment and software and the other components of GDP (Landefeld 2002). The BEA intends to continue to work with the Census Bureau to improve the quality and timeliness of the business and government surveys and to work with the Bureau of Labor Statistics (BLS) to provide quality-adjusted price indexes for high-tech goods and to expand coverage of high-tech services. The BEA also plans to conduct its own research toward developing quality-adjusted price indexes for selected IP components where data may be available to adjust for changing characteristics.
In addition, the BEA plans to improve its IP equipment and software estimates—particularly the software component—in its input-output tables and in its national income and product and international transactions accounts. In order to facilitate research leading to improved measurement of information technology, this paper discusses the relationship between private fixed investment in IP equipment and software and GDP, explains how the current- and constant-dollar estimates are prepared, and finally assesses recent progress in measurement and plans for improvement.

10.2 IP Equipment and Software Investment and Movements in Real GDP

Information-processing equipment and software investment played important roles in both the acceleration of real GDP during the 1990s and its slowing in 2000–2001. The acceleration of real GDP began in late 1995. As shown in table 10.2, the average rate of growth of real GDP increased from 2.4 percent in 1991–95 to 4.0 percent in 1996–2000. In 1996–2000, nearly one-fourth of the increase in the average rate of growth of real GDP was accounted for by IP equipment and software investment, and another fourth was accounted for by all other private fixed investment. Information-processing equipment and software investment contributed 0.72 percentage point to the average growth rate of real GDP in 1996–2000, slightly more than the contribution of all other private fixed investment.1

Real GDP slowed during 2000; it slowed from 3.7 percent in the first half of the year to 0.9 percent in the second half. In the first half of 2001, real GDP growth declined 1.1 percent as the economy slipped into a recession. Real GDP then increased 1.2 percent in the second half, as a decrease of 0.3 percent in the third quarter was slightly more than offset by an increase of 2.7 percent in the fourth quarter. As shown in table 10.3, this short-run pattern was largely the result of declining, then negative, contributions to changes in real GDP from both private fixed investment and change in private inventories. The contributions of change in private inventories were negative in every quarter of 2000 and 2001, with the exception of 2000Q2. The contributions of IP equipment and software investment declined after 2000Q1 and were negative in all four quarters of 2001. The contributions of all other private fixed investment declined sharply in the first half of 2000 and thereafter were negative in all quarters except 2001Q1. Thus, the contributions of IP equipment and software investment played a large role in the declining trend of real GDP after the beginning of 2000.

1. A 0.38 percentage point contribution of IP equipment and software investment in 1991–95 was in line with its gradually increasing contributions over the post-WWII era, which ranged from 0.06 percentage point in 1951–60 to 0.33 percentage point in 1981–90. The doubling of its contribution in 1996–2000 was a substantial deviation from its historical trend.
Table 10.2    Contributions to average percent changes in real gross domestic product

                                                     1991–95    1996–2000    Change
Percent change at annual rate
  GDP                                                   2.4         4.0        1.6
Percentage points at annual rates
  Private fixed investment                              0.62        1.40       0.78
    Information processing equipment and software       0.38        0.72       0.34
    Other private fixed investment                      0.24        0.68       0.44
  Change in private inventories                         0.06        0.10       0.04
  Other GDP components, net                             1.70        2.50       0.80

Source: Derived from NIPA table 8.2.
Table 10.3    Contributions to percent change in real gross domestic product

                                                        2000                          2001
                                               Q1     Q2     Q3     Q4     Q1     Q2     Q3     Q4
Percent change at annual rate
  GDP                                          2.6    4.8    0.6    1.1   –0.6   –1.6   –0.3    2.7
Percentage points at annual rates
  Private fixed investment                     2.15   1.15   0.04  –0.41  –0.38  –1.95  –0.72  –1.49
    Information processing equipment
    and software                               1.15   0.78   0.20   0.08  –0.47  –0.86  –0.48  –0.05
    Other private fixed investment             1.00   0.37  –0.16  –0.49   0.09  –1.09  –0.24  –1.44
  Change in private inventories               –1.77   1.77  –1.12  –0.14  –3.27  –1.14  –0.09  –1.39
  Other GDP components, net                    2.22   1.88   1.68   1.65   3.05   1.49   0.51   5.58

Source: Derived from NIPA table 8.2.
Figure 10.1 shows trends using three-quarter centered moving averages.2 The moving average of changes in real GDP fell from 4.8 percent in 2000Q1 to –0.4 percent in 2001Q1 and to –0.8 percent in 2001Q2. From 2000Q1 to 2001Q2 the declining contributions of IP equipment and software accounted for about one-fourth of the fall in the trend growth of real GDP; this was similar to its relative contribution to the acceleration in real GDP from the first to the second half of the 1990s, as well as to the recovery in real GDP in 2001Q3 and 2001Q4.

2. A three-quarter centered moving average is used to describe trends because it acts to smooth erratic quarter-to-quarter movements in real GDP.

Fig. 10.1    Real GDP and the contributions of private fixed investment in information-processing (IP) equipment and software
Notes: Three-quarter moving average, percent change at seasonally adjusted annual rate.

10.3 Estimating Private Fixed Investment

In addition to the usual challenges associated with measuring real output, measuring real output in IP equipment and software presents some additional challenges because new products are constantly developed and introduced into this category and because existing product characteristics in this category tend to change more rapidly than product characteristics in other categories.
These additional challenges presented by new and changing products include the following:
• Benchmark extrapolators. The most recent benchmark input-output (I-O) tables are for 1992, a year in which some of the products presently included in IP equipment and software did not exist in their present form. Nonbenchmark year estimates reflect extrapolations, where the extrapolators must be flexible enough to reflect the current year's basket of goods and at the same time fit the description of an existing benchmark year component.
• Source data. Naturally, new products present problems for BEA's source data agencies. For example, when a new or significantly modified product is introduced into the BLS producer price index (PPI), an appropriate link must be formed. Similarly, when a manufacturer starts shipping a new product, the Census Bureau must determine exactly where to classify the new product. Often with the introduction of new products, survey questionnaires need to be modified.
• Product knowledge. It is important for the statistical agencies' analysts to understand the products being measured. As more and more of these products are significantly changed or introduced, it becomes more and more difficult for the analysts to stay current.
Fig. 10.2   Commodity flow
10.3.1 Current-Dollar Estimates
Information-processing equipment and software investment, excluding own-account software, is estimated in current prices primarily by the "commodity-flow" methodology, with periodic benchmarking to the quinquennial I-O tables. The commodity-flow method is a "supply-side" approach, which traces commodities from their domestic production or importation to their final purchase. (Figure 10.2 illustrates the commodity-flow method.) The strength of the commodity-flow method is that it draws on the very detailed commodity classification and comprehensive coverage of the economic censuses, as well as on the conceptual rigor of an I-O table in which production and uses of commodities are reconciled for benchmark years. It provides detailed information on the commodity composition of investment, but it does not yield information on investment by industry or by class of purchaser.
An alternative estimation method that is used by many countries is a "demand-side" approach, which bases estimates on capital expenditure data collected from purchasers, such as the U.S. Annual Capital Expenditures Survey (ACES). For the BEA, a supply-side approach is preferable to a demand-side approach for two reasons. First, the estimate begins with the most reliable available information—domestic and import supply—which is then assigned to specific types of expenditures (i.e., intermediate expenditures, private investment expenditures, consumer expenditures, exports, and government expenditures). In contrast, source data for demand-side measures are generally less comprehensive, especially for some IP components such as software. Second, the supply-side approach yields additional detail by type of asset that is generally not available from capital expenditure surveys. Typically, ACES provides annual estimates for capital expenditures by industry, but not by type of investment. Capital expenditures by type are published every five years—the latest year available at the time of this writing was 1998—and they provide only a limited amount of information by type of asset. In the NIPAs, the supply-side approach is used to estimate total investment and investment by type of asset, and then ACES is used along with other information to allocate investment by industry.
For many products, the two approaches yield similar results; however, there can be considerable differences. Table 10.4 presents a comparison of new capital expenditure estimates for total equipment and software.3 The capital expenditure estimates prepared by the Census Bureau (line 3) using a demand-side approach are consistently lower than the corresponding estimates prepared by BEA (line 1) based on the commodity flow. However, there is no clear trend when comparing growth rates of the two series (line 6).4 The addenda present NIPA private fixed investment in equipment and compare this series with ACES capital expenditures for equipment and software. This comparison may be more appropriate because there is anecdotal evidence that most businesses do not treat software as a capital expenditure, and therefore it is likely that capital expenditures for software are significantly understated in the ACES estimates. However, once again there is no clear trend when comparing growth rates of the two series (line 10).

Table 10.4   Bureau of Economic Analysis national income and product accounts (NIPAs) private fixed investment versus Bureau of the Census Annual Capital Expenditure Survey (ACES) capital expenditures (billions of dollars, unless otherwise specified)

                                              Line   1993   1994   1995   1996   1997   1998   1999   2000
NIPA
  Private fixed investment in new
    equipment and software                      1   538.8  599.7  663.7  717.7  786.3  861.7  932.6  992.7
  Percent change                                2           11.3   10.7    8.1    9.6    9.6    8.2    6.4
ACES
  Capital expenditures for new
    equipment and software(a)                   3   319.7  358.5  454.9  526.0  562.0  606.2  689.6  758.0
  Percent change                                4           12.2   26.9   15.6    6.8    7.9   13.7    9.9
NIPA less ACES
  Difference in level                           5   219.1  241.2  208.8  191.7  224.3  255.5  243.0  234.7
  Difference in percent change                  6           –0.9  –16.2   –7.5    2.7    1.7   –5.5   –3.5
Addenda
  NIPA private fixed investment in
    new equipment                               7   469.4  524.2  580.2  622.6  669.8  721.7  770.1  813.3
  Percent change                                8           11.7   10.7    7.3    7.6    7.7    6.7    5.6
  NIPA private fixed investment in new
    equipment less ACES capital expenditures
    for new equipment and software              9   149.7  165.7  125.2   96.5  107.8  115.4   80.6   55.3
  Percent change                               10           –0.5  –16.2   –8.3    0.8   –0.1   –7.0   –4.3

(a) Although the series conceptually includes capitalized software, the census does not include the word "software" in the title. Also, estimates for years 1993 and 1994 are only for companies with five or more employees.

3. The estimates presented in table 10.4 are for new equipment only because the definition of used equipment and software is conceptually different between NIPA private fixed investment and ACES. Detailed comparisons shown in table 10.5, however, are for both new and used equipment and software because detailed estimates for private fixed investment in only new equipment and software are not published separately in the NIPAs.
4. The large increase in 1995 in the ACES series (line 4) reflects the fact that prior to 1995, published estimates were only available for companies with five or more employees.
Table 10.5   NIPA private fixed investment versus ACES capital expenditures by type, 1998 (billions of dollars)

                                                     Line    NIPA    ACES   Difference
Information-processing equipment and software          1   363.4   183.6      179.8
  Computers and peripheral equipment                    2    84.2    82.5        1.7
  Software                                              3   140.1    11.8      128.3
    Prepackaged and custom(a)                           4    92.2    11.8       80.4
    Own-account(b)                                      5    47.9
  Communication equipment                               6    81.2    59.5       21.7
  Instruments                                           7    36.3    19.7       16.6
  Office, accounting, and photocopy and
    related equipment                                   8    21.7    10.1       11.6
    Photocopy and related equipment                     9    13.7
    Office and accounting equipment                    10     8.0
Addendum
  Information-processing equipment and
    software less software                             11   223.3   171.8       51.5

(a) A significant source of difference between the estimates stems from the fact that businesses typically did not treat software as a capital expenditure in 1998.
(b) The annual capital expenditure survey will recognize own-account software investment beginning with estimates for 2001.
Table 10.5 presents a detailed comparison of estimates prepared by the BEA and the Census Bureau for capital investment in IP equipment and software for 1998—the most recent year for which detailed ACES capital expenditures are available by type. There are several reasons, aside from the obvious, why the two sets of estimates may differ:
• The ACES estimates for capital expenditures by type reflect only expenditures by companies with employees, while NIPA private fixed investment estimates reflect capital expenditures by all companies and nonprofit institutions.
• The NIPA private fixed investment estimates in nonbenchmark years reflect certain assumptions based on the most recent benchmark year. For example, allocation shares of total supply to the appropriate expenditure category (intermediate, business, household, or government) and factors used to convert from producer values to purchaser values are typically unchanged. The ACES estimates do not rely on such assumptions.
• The American Institute of Certified Public Accountants issued a statement of position in March of 1998 (no. 98-1, Accounting for the Costs of Computer Software Developed or Obtained for Internal Use), stipulating that software with a service life of more than one year should be treated as capital investment, effective for financial statements for fiscal years beginning after December 15, 1998. It is likely that a greater share of ACES respondents reported capitalized software in subsequent years. (The next year for which ACES estimates separately identify and publish capital expenditures for software is 2001. These estimates were not available at the time this paper was written.)
• Definitions and general instructions for the 1998 ACES act to exclude computer software if considered intangible.5 Anecdotal evidence suggests that businesses failed to report many purchases of software as investment in the 1998 ACES.
• Internal Revenue Service (IRS) regulations allow low-value items (under $17,500 for 1998) that fit the criteria for capital investment to be expensed. It is possible that much software falls into this category and that ACES respondents follow IRS guidelines when determining what is a capital investment.
5. The definitions and general instructions for ACES for subsequent years act to include capitalized computer software.
For example, in the 1998 ACES, U.S. companies reported expenditures of $11.8 billion on capitalized software purchased separately (see table 10.5, line 4). In contrast, the 1999 Census Bureau Service Annual Survey (SAS) reported sales for 1998 by the prepackaged software industry—that is, software publishing—of more than $70 billion, and sales by the custom software industry—that is, computer programming services—of more than $50 billion. The BEA's commodity-flow methodology produced an estimate of business investment in these two types of software of somewhat more than $90 billion, more than seven times as much as reported by business in the ACES. While this comparison of software estimates is not typical of most products, it does demonstrate the potential differences between the two approaches.
The commodity-flow method of estimating equipment is implemented in its most complete form for estimates in the I-O tables for benchmark years. For nonbenchmark years, the commodity-flow method is abbreviated to utilize the data that are available for the annual NIPA estimates.
Table 10.6   Commodity flow example for computers (millions of dollars, seasonally adjusted at quarterly rates)

Commodity flow                                              Line    2001Q1    2001Q2    % Change
Manufacturers' industry shipments of computers
  and related products                                        1     32,293    28,051      –13.1
– Adjustment to convert from industry shipments
  to shipments of PES products                                2      5,765     4,928      –14.5
Product shipments, producer value(a)                          3     26,528    23,123      –12.8
– Export supply, producer value(a)                            4      4,670     4,367       –6.5
– Change in trade inventories(b)                              5          0         0
Domestic supply, producer value                               6     21,858    18,756      –14.2
– Intermediate, household, and government purchases           7      8,786     7,556      –14.0
+ Trade and transportation margins                            8      2,875     2,457      –14.6
Domestic supply to PFI, purchaser value                       9     15,948    13,657      –14.4
Import supply, producer value(a)                             10      9,912     9,026       –8.9
– Intermediate, household, and government purchases          11      3,985     3,658       –8.2
+ Trade and transportation margins                           12      1,383     1,249       –9.6
Import supply to PFI, purchaser value                        13      7,309     6,617       –9.5
Total PES extrapolator for computers and peripheral
  equipment (sum of lines 9 and 13)                          14     23,257    20,274      –12.8
Apply percent change from PES extrapolator (line 14)
  to calculate published estimate (line 15):
Total PES computers and peripheral equipment
  (billions of dollars, annual rate)                         15      102.9      89.6      –12.8

Notes: PES = private fixed investment in equipment and software; PFI = private fixed investment.
(a) Excludes products considered wholly intermediate.
(b) For quarterly estimates, change in inventories is assumed to be zero.
A further abbreviation of the commodity-flow method is used for current quarterly estimates. An illustrative example using the estimate of private fixed investment in computers and peripheral equipment for 2001Q2 is shown in table 10.6. A step-by-step explanation of table 10.6 follows (a worked numerical sketch appears after the list):
• Manufacturers’ industry shipments of computers and related products (line 1) are from the Census Bureau’s monthly publication of Manufacturers’ Shipments, Inventories, and Orders (M3). The adjustment to convert M3 industry shipments to private equipment and software (PES) product shipments (line 2) is derived by comparing corresponding M3 industry shipments to the most recent year’s product shipments from the Census Bureau’s Annual Survey of Manufacturers (ASM). The difference is product shipments in producer value (line
IP Equipment and Software in the National Accounts
•
•
•
•
• •
•
373
3). “Producer value” (as opposed to “purchaser value”) indicates that the shipments are valued at the plant and do not reflect trade or transportation margins. Next, “export supply” in producer value (line 4) is subtracted from the product shipments yielding domestic supply, still in producer value (line 6). Exports are derived from the Census Bureau’s monthly Foreign Trade (CFT) statistics. The CFT exports are adjusted slightly for coverage (e.g., NIPA territorial adjustment).6 No attempt is made to estimate quarterly inventory change by type of commodity, although annual estimates are made for selected commodities. Accordingly, change in inventories for computers and peripheral equipment is assumed to be zero (line 5). Intermediate, household, and government purchases (line 7) are subtracted from domestic supply, producer value. These purchases are derived from detailed benchmark I-O estimates, the most recent annual estimate for personal consumption expenditures for computers, and the most recent annual estimate for government purchases of computers and peripheral equipment.7 Next, trade and transportation margins (line 8) are added in order to convert the domestic supply to private fixed investment from a producer value to a purchaser value (line 9). The trade and transportation margins are derived from detailed benchmark I-O estimates. “Import supply” (line 10) is derived from the monthly CFT statistics and, like export supply, is adjusted slightly for coverage. Intermediate, household, and government purchases (line 11) are subtracted, and the trade and transportation margins (line 12) are added. The result is import supply to private fixed investment in purchaser value (line 13). The sum of domestic supply to private fixed investment (line 9) and import supply to private fixed investment (line 13) is the total PES extrapolator (line 14). Published PES computers and peripheral equipment (line 15) are derived using the percent change in the extrapolator.8
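The arithmetic of the abbreviated quarterly commodity flow can be sketched as follows, using the 2001Q1 and 2001Q2 figures from table 10.6 (millions of dollars at quarterly rates). The function and variable names are purely illustrative and are not BEA's actual estimation system.

  # Minimal sketch of the commodity-flow calculation in table 10.6.
  def pes_extrapolator(shipments, industry_to_product_adj, exports,
                       inventory_change, dom_other_purchases, dom_margins,
                       imports, imp_other_purchases, imp_margins):
      """Return the total PES extrapolator (table 10.6, line 14)."""
      product_shipments = shipments - industry_to_product_adj            # line 3
      domestic_supply = product_shipments - exports - inventory_change   # line 6
      domestic_to_pfi = domestic_supply - dom_other_purchases + dom_margins   # line 9
      import_to_pfi = imports - imp_other_purchases + imp_margins        # line 13
      return domestic_to_pfi + import_to_pfi                             # line 14

  q1 = pes_extrapolator(32_293, 5_765, 4_670, 0, 8_786, 2_875, 9_912, 3_985, 1_383)
  q2 = pes_extrapolator(28_051, 4_928, 4_367, 0, 7_556, 2_457, 9_026, 3_658, 1_249)

  pct_change = (q2 / q1 - 1) * 100       # roughly -12.8 percent (line 14)
  published_q1 = 102.9                   # billions of dollars, annual rate (line 15)
  published_q2 = published_q1 * (q2 / q1)
  # published_q2 comes out near 89.7 here; the published line 15 value is 89.6
  # after rounding of the intermediate steps.
  print(round(q1), round(q2), round(pct_change, 1), round(published_q2, 1))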
The methodology described in this example can be applied to all other components of IP equipment with the exception of software. Currentquarter estimates for private fixed investment in prepackaged and custom software are extrapolated directly using data on receipts from company reports to the Securities and Exchange Commission and data on monthly 6. The treatment of U.S. territories, Puerto Rico, and the Northern Mariana Islands in the NIPAs differs from that in the international transactions accounts (ITAs). In the NIPAs, they are included in the rest of the world; in the ITAs, they are treated as part of the United States. 7. For more information, see BEA (1998). 8. Published estimates for private fixed investment in computers and peripheral equipment can also be found in NIPA table 5.4, line 10.
Current-quarter estimates for own-account software are extrapolated using a lagged three-quarter moving average of the indicator for private fixed investment in computers and peripheral equipment. While it is true that some of the data (i.e., trade and transportation margins, and information used to allocate supply to the various expenditure categories) are typically unchanged from the most recent benchmark I-O table and are therefore subject to scrutiny, the percent changes in the private fixed investment estimates are consistent with the percent changes in the underlying current-period source data (i.e., manufacturers' shipments, exports, and imports). It is important to note that the underlying methodologies for preparing quarterly NIPA estimates are aimed at accurately measuring NIPA aggregates.
For annual estimates, the ASM is used instead of the M3 to prepare the PES product shipments. Purchased software is an exception—here the BEA uses industry receipts from the SAS to estimate prepackaged and custom software sales. The commodity-flow procedure for annual estimates of purchased software is consistent with the estimates for equipment.
The commodity-flow method is not used in the annual estimation of own-account software investment. Own-account software investment is measured as the sum of production costs, which include employee compensation—both wage and nonwage—and the costs of intermediate inputs. Own-account software estimates are based on the numbers of programmers and computer systems analysts engaged in the production of software other than embedded software or software produced for sale. These numbers are calculated from the total number of programmers and computer systems analysts, where the effects of embedded software or software produced for sale are accounted for by limiting the share of such employment in one- (or two-) digit-SIC-level industries to a maximum of 0.2 percent of total employment in each industry; the limits affect own-account software investment in mining, durable and nondurable goods manufacturing, and business services. (Numbers of programmers and systems analysts in excess of these limits are assumed to be engaged in the production of software for sale or in the production of software that is embedded in or bundled with other products of these industries.) The adjusted estimates are then multiplied by a factor of 0.5 to account for the share of programmers' and computer systems analysts' time that is estimated to be spent doing tasks associated with new investment rather than such activities as minor revisions, upgrades, and maintenance. Together, the 0.5 factor and the limiting factor reduce business investment in own-account software to roughly one-quarter of what it would be if they were not included in the calculations. The same 0.5 factor is used for government programmers and computer systems analysts, but the limiting factor is not used.
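A minimal sketch of the two employment adjustments just described (the 0.2 percent cap at the one- or two-digit SIC level and the 0.5 new-investment factor); the function name and the industry figures are illustrative assumptions, not BEA source data.

  def own_account_software_employment(programmers_and_analysts, industry_employment,
                                      cap_share=0.002, new_investment_factor=0.5):
      """Adjusted count of programmers and computer systems analysts treated as
      producing own-account software investment in one industry."""
      # Employment above 0.2 percent of the industry total is assumed to produce
      # software for sale or software embedded in other products, and is excluded.
      capped = min(programmers_and_analysts, cap_share * industry_employment)
      # Only half of the remaining time is treated as new investment rather than
      # minor revisions, upgrades, and maintenance.
      return new_investment_factor * capped

  # Hypothetical industry: 12,000 programmers/analysts out of 2,000,000 employees.
  # The cap binds at 4,000, and the 0.5 factor leaves 2,000 counted.
  print(own_account_software_employment(12_000, 2_000_000))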
The adjusted numbers of programmers and computer systems analysts are multiplied by national median wage rates for these occupations as well as by factors that translate from wages to total compensation at the one- or two-digit industry level, and summed to get totals for all business, for federal government, and for state and local governments. The three compensation estimates are then blown up by factors derived from the 1987 and 1992 benchmark I-O tables to obtain total costs—which include intermediate inputs such as supplies, depreciation of physical capital, and management and support costs—and thus own-account software investment for business and for federal, state, and local governments.9 The own-account software estimates are thus based on numbers of programmers and computer systems analysts, which are converted to current-dollar estimates by a series of sequential computations.
9. See Parker and Grimm (2000) for greater detail about the calculation of own-account software, including the adjustments and limiting factor.
The price indexes used to deflate own-account software are calculated from a weighted average of indexes of compensation for programmers and computer systems analysts and of the intermediate inputs associated with their work. Compensation indexes are estimated separately for business and for government own-account investment. With the exception of the industry-level ratios that convert wage costs to compensation costs, the data used are from various BLS sources. The compensation ratios are based on industry-level data from NIPA tables 6.2 and 6.3.
For years after the most recent BLS occupation survey, business own-account software investment is extrapolated using NIPA estimates of current-dollar private fixed capital formation in computers and peripheral equipment. This extrapolation is needed because the BLS employment and wage rate estimates are available with a lag of at least two years. The ratio of own-account software to this capital formation is held constant at its 1998 value; because this ratio is for current-dollar values, it is unaffected by the tendency for computer prices to decline rapidly.
10.3.2 Real Estimates and Price Indexes
Changes in current-dollar private fixed investment in IP equipment and software reflect market prices in each period. For many purposes, it is necessary to decompose these changes into quantity changes and price changes. The changes in quantities and prices are calculated using a Fisher formula that incorporates weights from two adjacent periods. These annual changes are "chained" (multiplied) together to form time series of quantity and price indexes.10 Real estimates, or quantities, can be expressed as index numbers or as "chained dollars." At present, the reference year is 1996, and therefore the quantity indexes equal 100 in 1996. The chained-dollar expression for quantities is essentially an index; however, it is scaled to the current-dollar value of the reference year.
10. For more information, see BEA (2001).
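As a sketch in standard index-number notation (not the BEA's own exposition), let p and q denote the detailed prices and quantities of the components being aggregated. The year-to-year Fisher quantity relative and the chained quantity index with 1996 = 100 are

  F_t^Q = \sqrt{ \frac{\sum_i p_{i,t-1} q_{i,t}}{\sum_i p_{i,t-1} q_{i,t-1}} \cdot \frac{\sum_i p_{i,t} q_{i,t}}{\sum_i p_{i,t} q_{i,t-1}} }, \qquad Q_t = Q_{t-1}\, F_t^Q, \quad Q_{1996} = 100,

with the Fisher price relative defined symmetrically (interchanging the roles of p and q). Chained (1996) dollars for period t are then V_{1996} \times Q_t / 100, where V_{1996} is the 1996 current-dollar value.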
Accordingly, the chained (1996) dollar estimates for 1996 equal the current-dollar estimates for 1996, and other periods' values can be computed by multiplying the 1996 current-dollar values by the corresponding quantity index numbers divided by 100.
Detailed quantity estimates for private fixed investment in IP equipment and software are derived by deflation. That is, detailed current-dollar values are divided by detailed matching price indexes. For the majority of IP equipment and software, the PPIs are the foundation for the price deflator.
Detailed Price Indexes
Computers and Peripheral Equipment. Computers and peripheral equipment consist of eleven components for both annual and quarterly estimates. For recent periods, the price indexes used to deflate computers and peripheral equipment are derived from BLS PPIs and import price indexes (IPIs). These PPIs and IPIs are quality adjusted by BLS using hedonic techniques. Prior to the BLS's implementation of quality-adjusted computer prices using hedonic techniques, the BEA estimated its own set of detailed quality-adjusted computer price indexes. While the BEA methods also used hedonic techniques to quality adjust, the two approaches were quite different.11 Table 10.7 presents the detailed deflators used to construct real private fixed investment in computers and peripheral equipment.
Software. Software consists of three components, shown in table 10.8, for both annual and quarterly estimates. For recent periods, the price indexes used to deflate software are derived from PPIs, a BEA cost index, and a BLS employment cost index.12 Table 10.8 presents the detailed deflators used to construct real private fixed investment in software.
Communication Equipment. Within communication equipment, twelve components (eight domestic, four import) accounted for 98 percent of investment for the 1999 annual estimates, and two components (domestic total and import total) are used for the quarterly estimates.13 Ten different price indexes (eight domestic, two import) are used to deflate these annual components. For the quarterly estimates, detailed quarterly indexes corresponding to the annual components are weighted together using current-dollar shares from the most recent year available. Table 10.9 presents the detailed deflators used to construct real private fixed investment in communication equipment. (Note that component products were defined in the 1992 I-O table; however, current-year extrapolators reflect goods and services produced in the current year that may not have existed in 1992. For example, routers, switches, and hubs are included in the extrapolator for "Telephone and telegraph wire apparatus.")
11. For more information on the BLS computer price indexes, see Holdway (2001). For more information on the BEA computer price indexes, see Wasshausen (2002).
12. For more information, see Parker and Grimm (2000).
13. Components with current-dollar shares of less than 1 percent are not shown.
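The quarterly weighting just described can be sketched as follows. The chapter does not spell out the exact aggregation formula, so the simple share-weighted average below, with hypothetical index values and shares, is only illustrative.

  def aggregate_quarterly_index(detailed_indexes, current_dollar_shares):
      """Weight detailed quarterly price indexes together using fixed
      current-dollar shares from the most recent year available."""
      assert abs(sum(current_dollar_shares) - 1.0) < 1e-6
      return sum(share * index for share, index in
                 zip(current_dollar_shares, detailed_indexes))

  # Hypothetical detailed component indexes (1996 = 100) and current-dollar shares:
  print(aggregate_quarterly_index([78.0, 92.5, 101.3], [0.5, 0.3, 0.2]))  # 87.01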
Table 10.7   Computers and peripheral equipment, 1999

Component                        Current dollar share   Deflator
Domestically produced
  Computers, excluding PCs                0.12          Weighted average of PPI for large-scale computers and PPI for mid-range computers
  PCs                                     0.29          Weighted average of PPI for PCs and workstations and PPI for portable computers
  Storage devices                         0.08          PPI for computer storage devices
  Computer terminals                      0.00          PPI for computer terminals
  Peripheral equipment, NEC               0.12          PPI for computer peripheral equipment, NEC
  Systems integrators                     0.09          BEA aggregate computer price index
Imported
  Computers, excluding PCs                0.02          IPI for computers
  PCs                                     0.06          IPI for computers
  Storage devices                         0.06          IPI for computer storage devices
  Computer terminals                      0.08          IPI for computer displays, including monitors and terminals
  Peripheral equipment, NEC               0.08          IPI for computer printers

Notes: PCs = personal computers; PPI = producer price index; IPI = import price index; NEC = not elsewhere classified.
Table 10.8   Software, 1999

Component       Current dollar share   Deflator
Prepackaged            0.340           PPI for prepackaged software applications with a –3.15 percent per annum bias adjustment
Own-account            0.333           BEA input cost index consisting of compensation cost indexes and an intermediate inputs cost index
Custom                 0.327           BEA price/cost index reflecting weighted average of prepackaged and own-account percent changes

Notes: BEA = Bureau of Economic Analysis; PPI = producer price index.
Instruments. Within instruments, twenty-one components (sixteen domestic, five import) accounted for 98 percent of investment for the 1999 annual estimates, and two components (domestic total and import total) are used for the quarterly estimates. Fifteen different price indexes (twelve domestic, three import) are used to deflate these annual components.
Table 10.9   Communication equipment, 1999

Component                                        Current dollar share   Deflator
Domestically produced
  Telephone and telegraph wire apparatus                  0.20          BEA price index reflecting weighted average of FRB LAN price index and PPI for telephone and telegraph apparatus
  Telephone switching and switchboard equipment           0.17          PPI for telephone switching and switchboard equipment with a –5.7 percent per annum bias adjustment
  Communication equipment, excl. broadcast                0.14          PPI for communication equipment (except wire telephone and broadcast, cable or studio equipment)
  Search, detection, and navigation equipment             0.10          PPI for search, detection, navigation and guidance systems and equipment
  Force account, communication equipment
    installation                                          0.10          BEA cost index derived from average weekly earnings for electrical workers
  Industrial process design                               0.05          PPI for engineering design, analysis, and consulting services
  Laser systems, excl. communication                      0.01          PPI for laser systems and equipment (excl. communication, medical and surveying types)
  Broadcast related equipment                             0.01          PPI for broadcast, cable, studio, and related communication equipment
  Other(a)                                                0.01          Detailed PPIs
Imported
  Telephone and telegraph wire apparatus                  0.06          IPI for telecommunications equipment
  Telephone switching and switchboard equipment           0.06          PPI for telephone and telegraph apparatus with a –5.7 percent per annum bias adjustment
  Broadcast related equipment                             0.05          IPI for telecommunications equipment
  Communication equipment, excl. broadcast                0.04          IPI for telecommunications equipment
  Other(a)                                                0.01          Detailed IPIs

Notes: BEA = Bureau of Economic Analysis; IPI = import price index; PPI = producer price index.
(a) This component comprises several low-value items that are deflated separately with the appropriate PPI or IPI.
As with the quarterly estimates for communication equipment, detailed quarterly indexes corresponding to the annual components are weighted together using current-dollar shares from the most recent year available. Table 10.10 presents the detailed deflators used to construct real private fixed investment in instruments.
Photocopy and Related Equipment. Within photocopy and related equipment, thirteen components (nine domestic, four import) accounted for nearly all of investment for the annual estimates of 1999, and two components (domestic total and import total) are used for the quarterly estimates.
Table 10.10   Instruments, 1999

Component                                                  Current dollar share   Deflator
Domestically produced
  Surgical and medical instruments and apparatus                    0.24          PPI for surgical and medical instruments and apparatus, except furniture
  Analytical and scientific instruments                             0.10          PPI for laboratory analytical instruments
  Industrial process design                                         0.06          PPI for engineering design, analysis, and consulting services
  Laboratory and scientific apparatus                               0.06          PPI for laboratory apparatus
  Process control instruments                                       0.05          PPI for industrial process control instruments
  Physical properties testing and inspection equipment              0.05          PPI for physical properties and kinematic testing equipment
  Surgical appliances and supplies                                  0.04          PPI for surgical, orthopedic, and prosthetic appliances and supplies except surgical dressings
  Surgical and medical instruments, NSK                             0.04          PPI for surgical and medical instruments and apparatus
  Integrating and totalizing meters for gas and liquids             0.03          PPI for integrating and totalizing meters for gas and liquids
  Dental professional equipment and supplies                        0.03          PPI for dental professional equipment, incl. dental chairs, units, hand pieces, excl. X-ray
  Undistributed process control instruments                         0.02          PPI for industrial process control instruments
  Nuclear radiation detection and monitoring instruments            0.01          PPI for commercial, geophysical, meteorological and general purpose instruments
  Hospital furniture                                                0.01          PPI for surgical and medical instruments and apparatus
  Dental laboratory equipment and supplies                          0.01          PPI for dental professional equipment, incl. dental chairs, units, hand pieces, and excl. X-ray
  Undistributed measuring and controlling devices                   0.01          PPI for measuring and controlling devices
  Commercial, geophysical, general purpose instruments              0.01          PPI for commercial, geophysical, meteorological, and general purpose instruments
  Other(a)                                                          0.01          Detailed PPIs
Imported
  Surgical and medical instruments and apparatus                    0.11          IPI for scientific and medical machinery
  Analytical and scientific instruments                             0.05          IPI for recreational equipment and materials
  Process control instruments                                       0.04          IPI for measuring, testing, and control
  Surgical appliances and supplies                                  0.01          IPI for scientific and medical machinery
  Dental professional equipment and supplies                        0.01          IPI for scientific and medical machinery
  Other(a)                                                          0.01          Detailed IPIs

Notes: IPI = import price index; NEC = not elsewhere classified; NSK = not specified by kind; PPI = producer price index.
(a) This component comprises several low-value items that are deflated separately with the appropriate PPI or IPI.
Nine different price indexes (eight domestic, one import) are used to deflate these annual components. Detailed quarterly indexes corresponding to the annual components are weighted together using current-dollar shares from the most recent year available. Table 10.11 presents the detailed deflators used to construct real private fixed investment in photocopy and related equipment.
Office and Accounting Equipment. Within office and accounting equipment, thirteen components (eight domestic, five import) accounted for nearly all of investment for the annual estimates of 1999, and two components (domestic total and import total) are used for the quarterly estimates. Six different price indexes (five domestic, one import) are used to deflate these annual components. Detailed quarterly indexes corresponding to the annual components are weighted together using current-dollar shares from the most recent year available. Table 10.12 presents the detailed deflators used to construct real private fixed investment in office and accounting equipment.
Table 10.11   Photocopy and related equipment, 1999

Component                                         Current dollar share   Deflator
Domestically produced
  Photocopying equipment                                   0.17          PPI for photocopying equipment (incl. diffusion and dye transfers, electrostatic, etc.)
  Engineering services                                     0.14          PPI for engineering design, analysis, and consulting services
  Optical instruments and lenses, NES                      0.07          PPI for optical instruments and lenses
  Still picture equipment                                  0.03          PPI for still picture equipment
  Photocopy equipment: miscellaneous receipts              0.01          PPI for photographic equipment and supplies
  Motion picture equipment                                 0.01          PPI for motion picture equipment and projection screens
  Used photocopy equipment                                 0.01          PPI for photographic equipment and supplies
  Optical instruments and lenses, NSK                      0.01          PPI for laboratory analytical instruments
  Microfilming, blueprinting, and whiteprinting
    equipment                                              0.01          PPI for microfilming, blueprinting, and whiteprinting equipment
Imported
  Photocopying equipment                                   0.20          IPI for recreational equipment and materials
  Still picture equipment                                  0.19          IPI for recreational equipment and materials
  Optical instruments and lenses, NES                      0.14          IPI for recreational equipment and materials
  Motion picture equipment                                 0.02          IPI for recreational equipment and materials

Notes: IPI = import price index; NES = not elsewhere specified; NSK = not specified by kind; PPI = producer price index.
Table 10.12   Office and accounting equipment, 1999

Component                                         Current dollar share   Deflator
Domestically produced
  Accounting machines and cash registers                   0.19          PPI for calculating and accounting machines
  Used computer hardware, software                         0.14          PPI for calculating and accounting machines
  Industrial process design                                0.10          PPI for engineering design, analysis, and consulting services
  Mailing, letter handling, and addressing                 0.09          PPI for mailing, letter handling, and addressing machines, except parts and attachments
  Standard typewriters and office machines, NEC            0.06          PPI for office machines, NEC
  Scales and balances except laboratory                    0.05          PPI for parts, attachments, and accessories for scales and balances
  Office machines, NEC, NSK                                0.04          PPI for office machines, NEC
  Duplicating                                              0.01          PPI for office machines, NEC
Imported
  Accounting machines and cash registers                   0.22          IPI for business machinery and equipment, except computers
  Scales and balances except laboratory                    0.04          IPI for business machinery and equipment, except computers
  Standard typewriters and office machines, NEC            0.03          IPI for business machinery and equipment, except computers
  Mailing, letter handling, and addressing                 0.01          IPI for business machinery and equipment, except computers
  Duplicating                                              0.01          IPI for business machinery and equipment, except computers

Note: IPI = import price index; NEC = not elsewhere classified; NSK = not specified by kind; PPI = producer price index.
Inventory of Hedonic Price Indexes
Hedonic methods are sometimes used to quality adjust price indexes that are used to deflate several of the components of IP equipment and software:14
14. For more information on the BEA's use of hedonic quality adjustment, see Moulton (2001).
• Computers and peripheral equipment. All the detailed price indexes used to deflate computers and peripheral equipment employ hedonic methods for quality adjustment. As table 10.7 indicates, the BLS's PPIs and IPIs are used to construct the price indexes used to deflate private fixed investment in computers and peripheral equipment. In the PPIs for computers and peripheral equipment, hedonic functions are used to estimate prices for specified characteristics (like speed).
These estimated prices of specified characteristics are then used to quality adjust the price of a newly introduced model so that it is consistent with the discontinued model.15 The IPIs for computers and peripheral equipment use the estimated characteristics' prices from the PPI to quality adjust models as needed. The BEA first introduced quality-adjusted price indexes for computers and peripheral equipment into the NIPAs with its eighth comprehensive revision, released in December 1985. At that time, the BEA worked with IBM to develop quality-adjusted price indexes for five types of computing equipment—computer processors, disk drives, printers, displays (terminals), and tape drives.16 Hedonic methods were used to estimate coefficients (prices) for various characteristics (speed, memory, etc.). Composite price indexes were then constructed using both reported model prices and, for models not sold in the base year, model prices imputed from the characteristics' coefficients. As the BLS introduced hedonic PPIs starting in the early 1990s, the BEA switched to using these PPIs as deflators.17
• Software. In the eleventh comprehensive revision, released in October 1999, software was first recognized as fixed investment. The price index for prepackaged software reflects hedonic methods for quality adjustment for the period 1985–93. For 1985–93, the quality-adjusted price index is estimated by combining the BEA-developed hedonic price indexes and the Oliner-Sichel matched-model indexes.18 The BEA developed hedonic price indexes for two types of prepackaged software—spreadsheets and word processing.19 These hedonic price indexes are estimated using a methodology that is an extension of earlier work on software prices by Brynjolfsson and Kemerer (1996) and by Gandal (1994). The price index estimates are based on regressions in which the logarithm of prices of prepackaged software is a linear function of selected quality characteristics and of dummy variables for each year of the price observations. The resulting indexes are "regression" price indexes in which the coefficients of the dummy variables for each year are used to construct price index values for the sample periods of the regressions.20 The individual hedonic price indexes for the two types of software are weighted together to produce a summary hedonic price index for prepackaged software.
For periods other than 1985–93, source data are not adequate to prepare hedonic indexes, but a bias adjustment is applied to the matched-model indexes, reflecting part of the difference between the hedonic index and the matched-model index for 1985–93.
• Communication equipment. Two of the detailed price indexes used in the deflation of communication equipment use hedonic methods for quality adjustment: telephone switching equipment and local area network (LAN) equipment.
15. For more information on BLS computer price indexes and an illustrative example, see Holdway (2001).
16. See Cartwright (1986), Cole et al. (1986), and Triplett (1986).
17. For more information on BEA computer price indexes, see Wasshausen (2002).
18. See Oliner and Sichel (1994).
19. The data on prices and quality characteristics used to estimate the regressions are obtained from published editions of National Software Testing Laboratories' Ratings Reports. These data are available only through 1994. Hedonic estimates were also considered for database software, but the results were not adequate to support the estimation of a price index.
20. For more information, see Cole et al. (1986).
Price Index for Telephone Switching Equipment. In the July 1997 annual revision of the NIPAs, a BEA quality-adjusted price index for telephone switching equipment was adopted. This index covers the period 1985–96 and is based on a hedonic regression explaining the prices of digital telephone switches. Telephone switches have performed increasingly complex sets of operations over time. At their simplest, electromechanical switches—which were the best available technology until the early 1980s—performed essentially the same function that human telephone operators did previously: linking the calling telephone line to the called telephone line, and also providing a dial tone. Digital electronic telephone switches—which have supplanted the earlier electromechanical switches—perform many additional, computerlike functions. For example, they can take an incoming telephone analog voice input; convert it to digital signals; break it up into packets (this allows one line to handle more than one call at a time) that also include information about the call, including the calling number; send the packets to anywhere in the world, each by its own—most efficient—route; reassemble the packets into properly ordered digital signals; reconvert the call to analog voice outputs; and send it to the receiving telephone line. Switch operations are controlled by computer programs that are custom made for each switch.
The BEA's switch price index is based on publicly available data that were obtained from the Federal Communications Commission, which gathered the data from telephone operating companies to support rate-setting hearings. A hedonic regression for the prices of switches was estimated, using as explanatory variables the number of telephone lines of capacity of the switch, the type of switch and its manufacturer, the state that the switch was installed in (different states typically have very different ways of assembling switch networks—to be optimal under different calling densities, patterns, and distances—that affect the costs of the switches used as part of the networks), and the year that the switch was installed. The regression made the log of the switch price a function of the log of the number of lines, twenty-eight quality characteristics dummy variables, and thirteen year dummy variables. The data set included installed switches in twenty states that contained fifty-five percent of the U.S. population and that were chosen to be representative of the various regions of the United States.
The price index, which is a regression price index, was constructed using the coefficients of the year dummy variables and a smoothing algorithm of 0.6P(y) + 0.3P(y – 1) + 0.1P(y – 2); the smoothing was used to reduce erratic year-to-year movements in the raw index. The index has an average annual rate of decline of 9.1 percent from 1985 to 1996, and the declines range from 4.0 percent in 1991 to 23.1 percent in 1995. The price index was not extended past 1996 because the Telecommunications Reform Act of 1996 removed a mandatory reporting requirement, and telephone operating companies stopped reporting. In any event, in the last half of the 1990s new and radically different switching technologies began to be adopted, and very different quality characteristics became important in determining the capabilities and prices of switches.
Price Index for LAN Equipment. In the July 2001 annual revision of the NIPAs, a quality-adjusted price index for LAN equipment was adopted. This price index is published by the Federal Reserve Board (FRB) and was developed by Mark Doms and Christopher Forman (2001). A brief description of the price index and its methodology was published in the March 2001 Federal Reserve Bulletin. Doms and Forman found rapid rates of decline for prices of all of the types of LAN equipment that they examined. They broke LAN equipment down into four categories: routers, switches, LAN cards, and hubs. They used hedonic regressions to estimate price changes for routers and switches and a matched model for LAN cards, and prices for hubs were judgmentally inferred from their economic relationship to switches. There is little question that LAN equipment has exhibited significant and rapid improvements in quality, making it a good candidate to be quality adjusted using hedonic techniques. The technical note in the March 2001 Federal Reserve Bulletin states, "in 1995 Ethernet switches operating at 10 megabits per second dominated the market; last year, the two most popular switches operated at rates of 100 megabits per second."
The functional form for both (routers and switches) equations was log-log, and the index was calculated from the coefficients of the year dummies. For routers, the regressions controlled for about twenty characteristics, including bandwidth capacity, number of ports available for network interface modules, and processor speed of the router. For switches the approach was similar, with regressions primarily controlling for the number and types of ports and other capabilities.21 For the period 1995–99, their aggregate index for LAN equipment declines at an average rate of 18.0 percent per annum, not greatly different from the 22.7 percent per annum rate of decline for the NIPA price index for computers and peripherals over this time period.
21. See Doms and Forman (2001).
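As a generic sketch (the exact specifications differ across the prepackaged-software, telephone-switch, and LAN-equipment studies described above), a dummy-variable ("regression") price index is obtained from

  \ln p_{i,t} = \alpha + \sum_k \beta_k x_{k,i} + \sum_{t \ne t_0} \delta_t D_{i,t} + \varepsilon_{i,t}, \qquad P_t = 100\, e^{\delta_t} \quad (P_{t_0} = 100),

where the x_k are quality characteristics (or their logarithms) and the D_{i,t} are year dummies, so the year-dummy coefficients trace out quality-adjusted price change. For the telephone-switch index, the raw series P(y) was then smoothed as 0.6P(y) + 0.3P(y – 1) + 0.1P(y – 2), as noted above.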
Hedonic Estimates versus Other Estimates of Quality-Adjusted Prices
There is increasing evidence that carefully constructed hedonic price indexes may differ little from some types of traditional matched-model price indexes.22 Aizcorbe, Corrado, and Doms (2000) estimated price indexes for desktop personal computers and notebook computers in the period 1994–98 using both hedonic regressions and Fisher chain-weighted matched-model price estimates. They obtained very similar average rates of decline: weighted average annual rates of decline for the two types of computers were 29.1 percent for the matched-model estimates and 29.8 percent for the hedonic estimates. They also found that matched-model and hedonic price indexes yielded very similar estimates for average annual rates of decline for prices of Intel microprocessors in the period 1993–99: 56.3 percent for the matched-model estimates and 57.0 percent for the hedonic estimates. Similarly, BLS studies found that replacing its matched-model estimates with hedonic estimates only slightly raised the rate of price increase for videocassette recorders (VCRs) and slightly lowered it for televisions. See Moulton, LaFleur, and Moses (1999) and Liegey and Shepler (1999).
As part of its work to develop price indexes for semiconductors, the BEA estimated hedonic price indexes—with a regression in log-log form—that explained prices of Intel microprocessors as functions of a number of quality characteristics and year dummy variables. The primary use of the hedonic equation was to fill in missing price observations where quantity data were available but not prices; the hedonic estimates were used for about one-third of the price observations. The augmented set of price observations was used with the quantity observations to construct a Fisher chain-weighted matched-model price index. Over the period 1985–94, the matched-model price index had a somewhat more rapid average rate of decline than did a hedonic regression price index, 27.4 percent versus 22.0 percent; see Grimm (1998). Similar results were obtained for Motorola microprocessors.
As part of the work to develop its quality-adjusted index for digital telephone switches, the BEA constructed alternative price indexes based on the average cost per installed telephone line of capacity for two common types of switch—AT&T's 5ESS switch and Northern Telecom's DMS100 switch. For the 1985–95 period, these indexes declined at average rates of 9.0 and 9.1 percent per annum, about the same as the average rate of decline of 9.1 percent for the hedonic price index. Even the year-to-year patterns are roughly similar; for example, the 5ESS price-per-line index has had declines that are generally close to those for the hedonic index; in contrast, the previous price index for telephone switches increased slightly over the period (figure 10.3).
22. See Landefeld and Grimm (2000).
Fig. 10.3   Telephone switch prices (1992 = 100)
Thus, when matched-model and hedonic price index estimates using the same data sets are compared, the results are often similar, and hedonic estimates do not always yield greater rates of decline or lower rates of increase. Dulberger has suggested that, at least with regard to computer chips, the differences in rates of decline between some quality-adjusted price index estimates and the PPI estimates may stem from price patterns that combine rapid early rates of decline for new models with lags in adding the new models to the PPI sample; see Dulberger (1993).
10.4 Recent Progress and Plans for Improvement
10.4.1 Recent Improvements
1997 Benchmark I-O Table
Improvements in the estimates of purchased software include the incorporation of greater detail and more complete information from the 1997 economic censuses. Estimates for own-account software now reflect both finer levels of detail in calculations and the incorporation of newly available data from the BLS and the Census Bureau that both support the finer-level calculations and allow more direct estimation of the costs of production based on wage costs.23 In addition, estimates for own-account software now include, for the first time, the capitalization of software originals used for reproduction.
23. For more information on BEA plans to improve software, see Moylan (2001).
Overall, these improvements will result in a downward revision in 1997 to NIPA private fixed investment in software of approximately $18.5 billion.
Improved Estimates of Intermediate Consumption of Purchased Software. A weakness in the estimation of fixed investment in software has been the measurement of intermediate consumption. Recent economic censuses, which are the source of the intermediate consumption estimate, did not collect adequate information on purchases of software by manufacturers. In addition, when the 1992 benchmark I-O table was completed, software was treated as intermediate consumption, not as investment. The BEA did not make any supplementary adjustments to the 1992 census to account for intermediate software purchases by manufacturers; the reported census data were used. Consequently, intermediate purchases of software may have been underestimated.
For the 1997 I-O estimates, software was treated as investment, and adjustments were made to supplement economic census data on intermediate software purchases by computer (and possibly other) manufacturers. New sources were used to derive estimates of purchased software embedded in or bundled with other equipment. For example, one source is annual detailed company revenue reports. At least one large software manufacturer reports receipts from original equipment manufacturer (OEM) software in its annual report. These receipts provide information on software embedded in other equipment. These OEM receipts were used along with industry experts' estimates to calculate intermediate purchases.
Expanded Definition of Exports and Imports of Purchased Software. The definition of exports and imports of software was expanded to more accurately reflect the international trade of software. The present methodology includes only those exports and imports captured in the data on trade in goods from the Census Bureau. Estimates of royalties and license fees for electronically transmitted software are included in the exports and imports of services estimates and should be included in the commodity flow for estimating fixed investment in software. Until 1997, however, these royalties and license fees were not separately identifiable in the foreign trade data.
Improved Estimates of Own-Account Software. Own-account software estimates in the benchmark 1997 I-O table include the incorporation of both finer levels of detail and more complete information from the 1997 economic censuses than was available from the annual surveys for 1997, as well as additional and more detailed data available from the BLS on an annual basis, beginning with 1997. In addition, in-house expenditures for software originals whose purpose is to be reproduced were recognized as private fixed investment for the first time.
Improvements in the I-O estimates for own-account software are in three areas. First, new data are available from the BLS that separately identify the number of computer systems analysts excluding computer engineers and computer scientists; previously, these occupational categories had been combined. The exclusion of computer engineers and computer scientists results in a more accurate measure of the number of persons who are predominantly engaged in the creation of own-account software. Second, adjustments to reduce the total number of computer programmers and systems analysts (in order to avoid double-counting work performed by some of these employees to create embedded software or software produced for sale) were estimated from three-digit detail; previously, they were estimated primarily from two-digit detail. The result was a more finely tuned set of estimates of the number of computer programmers or computer systems analysts who are creating investment in own-account software rather than software to be sold or embedded in or bundled with sales of other goods. Third, the BLS now publishes estimates of mean wages of computer programmers and computer systems analysts by industry; previously, only median wages had been published. Further, the Census Bureau now publishes estimates of both total costs and wage costs for the custom and prepackaged software industries. These allow a more accurate and more direct calculation of the costs of producing own-account software investment. In particular, the multistep process previously used in order to go from wages to compensation to total costs was replaced by a one-step process that uses this information about total costs versus wage costs in the programming industries. The previous methodology used a blow-up factor to go from compensation to costs that was based on a national average that includes manufacturing firms as well as software firms, and thus included industries with widely differing proportions of indirect costs to compensation costs.
Estimates for own-account software investment in the benchmark 1997 I-O table also include expenditures for software originals whose purpose is to be reproduced; previously, these expenditures were excluded from own-account software investment. The rationale behind the new treatment is that when a company produces an original software application (e.g., Windows 2000), it has produced a fixed asset with an expected service life. This asset can then be used to create copies of the software application, which are separate assets. Therefore two separate assets have been measured: (1) the original is an asset used by the company that produced the software original to make copies, and (2) the copies are assets used by the software-purchasing companies to perform their production activity. This treatment is consistent with the guidelines of the 1993 System of National Accounts, which recommends that software originals and copies of the original be recorded as investment.24
24. For more information, see Commission of the European Communities et al. (1993).
2001 NIPA Annual Revision
Several methodological changes were introduced as part of the 2001 annual revision of the NIPAs that led to improved estimates for IP equipment and software investment:
• Improved Methodology for Estimating Quarterly Fixed Investment in Purchased Software. The quarterly estimates of fixed investment in prepackaged and in custom software were improved because the estimates of prepackaged software are now interpolated and extrapolated using data on receipts from company reports to the Securities and Exchange Commission (SEC) and data on monthly retail sales of business software from a trade source. In addition, the estimates of custom software are now interpolated and extrapolated using the SEC data. Previously, the quarterly estimates of prepackaged software and of custom software were interpolated and extrapolated using BLS tabulations of state unemployment insurance data on wages and salaries in the prepackaged software and computer programming services industries (SIC 7372 and SIC 7371, respectively). However, the Census Bureau SAS continues as the primary data source for the annual estimates of prepackaged and custom software. The improved quarterly extrapolators are conceptually more consistent with the SAS receipts data than the previously used quarterly extrapolators.
• Incorporated Newly Available Price Index from the FRB that Reflects Quality Improvements to LAN Equipment. As described previously, a newly available price index from the FRB that reflects quality improvement to LAN equipment—routers, switches, and hubs—is now used in the deflation of communication equipment investment. The improved deflator, which is a weighted geometric mean of the FRB LAN equipment price index and the PPI for telephone and telegraph apparatus (a formulaic sketch follows this list), is now used to deflate the component of communication equipment that reflects LAN equipment (see table 10.9); previously, the PPI for telephone and telegraph apparatus was used to deflate this component.
• Improved Methodology for Estimating the Price Index Used to Deflate Fixed Investment in Custom Software. An improved price index is now used in the deflation of custom software that is based on a weighted average of the own-account software price index and the PPI for prepackaged software applications sold separately (nonsuite).25 The use of the index for nonsuite applications more appropriately reflects the type of existing programs or program modules that are often incorporated into custom software. Previously, the PPI for all prepackaged software applications was used, together with the own-account software price index, to deflate custom software.
25. A weighted average is used because custom software consists of a mixture of new programming and existing programs or program modules (including prepackaged software) that are incorporated into new systems.
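A formulaic sketch of the weighted geometric mean mentioned in the second bullet; the weight w is an illustrative assumption, as the chapter does not report the weights used:

  P_t = \left(P_t^{\mathrm{LAN}}\right)^{w} \left(P_t^{\mathrm{PPI}}\right)^{1-w}, \qquad 0 < w < 1,

where P^{LAN} is the FRB LAN equipment price index and P^{PPI} is the PPI for telephone and telegraph apparatus.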
10.4.2 Plans for Improvement
In the next comprehensive revision of the NIPAs (scheduled to be released in late 2003), BEA plans to incorporate information from the 1997 benchmark I-O table and hopes to make additional improvements to the price estimates for custom and own-account software, photocopying equipment, medical equipment, and telecommunications transmission equipment.
Improved Current-Dollar Estimates: Improved Source Data
The administration's budget for fiscal year (FY) 2003 includes two Census Bureau initiatives which, if funded, could significantly improve the measurement of private fixed investment in IP equipment and software. The initiatives focus in large part on improvements to information- and technology-related services data, including adding new industries and information to existing annual surveys and introducing a quarterly services survey.
• Quarterly services indicator. The Census Bureau has proposed collecting and publishing quarterly industry receipts for selected industries, including NAICS categories 5112, "Software publishers," and 5415, "Computer systems design and related services." These industries consist of establishments that are primarily engaged in producing prepackaged and custom software. Presently, the only representative government survey of the industries that produce prepackaged and custom software is the SAS; quarterly or monthly data are not available (except for the information that can be gleaned from financial statements of publicly held corporations). The availability of such data would greatly improve the accuracy of the NIPA quarterly estimates by providing a more reliable measure of quarterly receipts for software-producing industries based on a much larger and more representative sample. (For more information on how these quarterly estimates are presently prepared, please see the first bullet in the "2001 NIPA Annual Revision" section above.) Collection is proposed to begin in the first quarter of 2004, covering data for the fourth quarter of 2003.
• Annual coverage of e-business infrastructure. This initiative would significantly augment information presently available from the ACES and could significantly improve our annual estimates for IP equipment and software investment. Two major changes affecting IP equipment and software investment are proposed in this component of the e-business initiative:
1. To respond quickly to data user needs, a new question for capitalized software will be proposed for the ACES.
Beginning with annual data for 2001, national totals for total capitalized software, capitalized prepackaged software, capitalized custom software, and capitalized own-account software would be available.
2. Beginning with annual data for 2003, and thereafter, national totals for capitalized and expensed information and communication technology (ICT) equipment and software would be collected and published.26
Although there are no plans to replace supply-side (commodity-flow-based) estimates with demand-side estimates, the detailed annual ACES estimates for capitalized and expensed ICT equipment and software would serve as an excellent check and could provide a sound basis for judgmental adjustments as needed. In addition, the availability of these new data would help in producing more accurate estimates of investment by industry.
Improved Price Indexes
Own-Account and Custom Software. The price index for own-account software is a BEA input cost index consisting of compensation cost indexes and an intermediate inputs cost index. The use of input costs assumes that there are no changes in the productivity of computer programmers and systems analysts. Because custom software consists of a mixture of both new and existing programs or program modules, including prepackaged software that is incorporated into new systems, the price index for custom software is a weighted average of the price indexes for business own-account software and for prepackaged software. The BEA is investigating an alternative approach for estimating price indexes for own-account and custom software that uses a metric referred to as "function points." This approach could take into account changes in the productivity of computer programmers and systems analysts. Function points (FPs) measure software by quantifying the functionality it provides to the user, based primarily on the logical design.27 Data on average cost per function point are available from trade sources and may prove to be useful in preparing a price index for own-account and custom software. McKinsey Global Institute prepared an alternative software price index using FPs, and the BEA will continue to evaluate their research.28
26. The ICT infrastructure includes expenditures on equipment (such as computers and peripherals); buildings and structures (such as server farms and digital transmission towers) and their maintenance; software; and related services (such as programming and network support staff supporting ICT equipment and structures).
27. For more information on function points, see Longstreet (2001).
28. For more information, see McKinsey Global Institute (2001).
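One way the trade-source data on average cost per function point could enter a price measure, as contemplated above, is as a simple price relative: the dollar cost of delivering one function point in each year. The figures below are invented for illustration and are not the trade-source data described in the text.

```python
# Sketch: a price index built from average cost per function point (hypothetical data).
avg_cost_per_fp = {1997: 1050.0, 1998: 1010.0, 1999: 980.0, 2000: 940.0}  # dollars per FP

base_year = 1997
index = {yr: 100.0 * cost / avg_cost_per_fp[base_year] for yr, cost in avg_cost_per_fp.items()}
for yr in sorted(index):
    print(yr, round(index[yr], 1))
```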
Photocopying Equipment, Medical Equipment, and Telecommunications Transmission Equipment. The BEA plans to conduct research on explicit quality adjustment for several products within IP equipment and software. These include photocopying equipment, medical equipment, and telecommunications transmission equipment. Presently, these products are deflated using PPIs and IPIs. In addition to the BEA's work, the BLS has been studying switches and routers and exploring possible hedonic methods for quality-adjusting prices for these goods. The FRB's staff has begun work on some other communications equipment prices, concentrating on fiber optics.29 If successful, these studies may lead to additional or improved quality-adjusted price indexes. In particular, the very rapid rate of increase of maximum telephone transmission rates suggests that substantial quality improvements have taken place.30
Presentational Improvements
The BEA plans to feature a new page on its web site entitled "Prices and Output for Information and Communication Technologies." The new page will contain data tables (both previously available and newly available tables), Survey of Current Business articles, BEA papers and presentations, and miscellaneous materials pertaining to prices and output for information and communication technologies. Presently, many but not all of these items are available on the BEA web site in a variety of locations. The new site will serve as a "one-stop shop" for these products.
Featured Data Tables. Several unpublished data tables will be posted on the new Internet page showing real and current-dollar estimates. These will include tables showing final sales of computer hardware, of computer software, and of communication equipment, and information on hardware and software prices.
Long-Term Commitment to Improvement
Our experience with IP equipment and software has taught us a number of lessons about the necessity of paying special attention to developments in technology and their impact on economic statistics. The BEA is committed to continuing work on improving the measurement of IP equipment and software and other types of high-tech investment. This work will need to be a cooperative endeavor by all of the major statistical agencies. We will continually need to identify new technologies and products and update our classification frameworks to capture them. Resources will need to be devoted to tracking and adjusting prices for quality change. We will need to work with industry, including participation in technical symposia, to develop the expertise to understand and monitor new developments in this area. We also need to work closely with researchers outside the statistical agencies who are studying the effects of these changes on productivity and on industrial organization.
29. See Doms (2002).
30. See Aron, Dunmore, and Pampush (1997) and Banks (1997).
10.5 Summary
There is evidence that investment in IP equipment and software played a significant role in the acceleration of both real GDP and labor productivity in the second half of the 1990s. In view of the increased importance of IP equipment and software as a type of investment, the BEA anticipates that this category will continue to play an important role in the future. The BEA recognizes the importance of accurately measuring investment in this category, including both estimates of prices and of current-dollar expenditures. Several improvements have recently been incorporated into the estimates for IP equipment and software, and the BEA continues to recognize the importance of pursuing further improvements in the measurement of these estimates. The BEA's strategic plan calls for continued work on improvements in the source data, improvements in the methods used for estimation of software, and continued work on developing quality-adjusted price indexes and improved measures of high-tech services. In addition, the BEA plans to continue working with the Census Bureau and the BLS to support initiatives by those agencies that will lead to more accurate or more timely data for IP equipment and software investment. Furthermore, the BEA hopes to take a proactive role in identifying new developments in technology that might lead to earlier incorporation of new products in the national accounts and in other government surveys.
Postscript: June 2004
In the time elapsed since the "final" draft of this paper was prepared in March 2003, the BEA has published revised estimates for private fixed investment in IP equipment and software as part of the 2003 comprehensive revision of the NIPAs.31 Improvements discussed in the "1997 Benchmark I-O Table" section have been incorporated into the NIPAs, and many of the improvements discussed in the "Plans for Improvement" section have also been implemented. It is the authors' intent in this postscript to bring the reader up to date with respect to recent developments, including methodological improvements, and to discuss our present analysis of using "function points" to estimate software price indexes.
31. On December 10, 2003, the BEA released the initial results from the twelfth comprehensive revision of the NIPAs, beginning with revised estimates for 1929; the results of the last comprehensive revision were released in October 1999. See Moylan and Robinson (2003) and Seskin and Larkins (2004).
Improved Current-Dollar Estimates
We discuss two additional improvements here.
Prepackaged and Custom Software
Receipts for prepackaged and custom software are now extrapolated from the 1997 benchmark I-O estimates using detailed sources of revenue from the Census Bureau's SAS for years 1998 through 2001. Previously, receipts for nonbenchmark years for prepackaged and custom software were extrapolated using industry receipts from the SAS. Using detailed sources of revenue provides a more accurate estimate of the receipts for these two products.
Systems Integrators
Computer systems integrators plan and design computer systems that integrate computer hardware, software, and communication technologies for their customers. A new asset type for systems integration services was introduced in both the 1997 benchmark I-O accounts and in the NIPA 2003 comprehensive revision. Previously, systems integration services were treated as intermediate consumption.32 In addition, a judgmental estimate for custom software receipts from the sale of integrated or turnkey systems was introduced into the commodity-flow estimate for custom software.
Improved Price Indexes
The price indexes for own-account software and photocopiers have been improved. We had initially hoped to use FPs to improve our software price indexes; however, at this point we are not pursuing their use. A detailed discussion is provided here.
Own-Account Software
In the paper's "Improved Price Indexes" section we discussed the fact that the price index for own-account software was a BEA input cost index consisting of compensation cost indexes and an intermediate inputs cost index. The sole use of input costs implicitly assumes that there are no changes in the productivity of computer programmers and systems analysts. In the 2003 comprehensive revision we addressed this weakness by using a weighted average of the input cost index (described previously) with the NIPA prepackaged software price index. As a result, the price indexes for custom and own-account software are now identical, and both reflect a price component that allows for changes in productivity.
32. See Lawson et al. (2002).
We researched the possibility of making an explicit productivity adjustment to the input-cost index. Two sources for deriving such an adjustment were explored: published BLS labor productivity measures for prepackaged software, and productivity measures estimated from an FP data set. However, we decided that more research was necessary prior to adopting this somewhat unconventional approach. Looking forward, we have decided to look outside of the BEA for help in estimating quality-adjusted price indexes for custom and own-account software. We are presently in the process of preparing a request for proposals, which will lead to a competitive selection of a contractor.
Function Points
In the "Improved Price Indexes" section we specifically discussed the possibility of using FPs to improve estimates of price indexes for own-account and custom software. Recall that FPs measure software by quantifying the functionality it provides to the user, based primarily on the logical design of the software application. The BEA purchased a fairly comprehensive FP database, spanning fourteen years with over 2,000 observations. Although the data set contained a plethora of variables, it did not contain reliable measures of project costs or prices, and therefore it was not possible to construct a price index directly from the FP data set. The FP data set did, however, support measures of productivity by project, measured in hours per FP. Several different annual productivity estimates were prepared from this data set, and we attempted to make a productivity adjustment to the input-cost index. Unfortunately, the productivity measures resulting from the FP data set were extremely volatile and therefore deemed unreliable. In addition to some of the difficulties described above, there does not appear to be a clear consensus on the usefulness of FPs as a metric of software utility. Many software experts argue that just because a given software application has more FPs than another, it does not necessarily mean the application provides more utility. Because of these concerns and limitations, we have decided not to pursue the use of FPs at this time.
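The productivity adjustment that was explored can be sketched as follows: annual hours per function point from the FP data set imply a labor productivity index, and dividing the input-cost index by that index yields a productivity-adjusted price index. All numbers below are invented; as noted above, the actual FP-based productivity measures proved too volatile to use.

```python
# Sketch: productivity-adjusting an input-cost index with hours-per-FP data (hypothetical).
input_cost_index = {1998: 100.0, 1999: 103.0, 2000: 106.5}   # BEA-style input cost index
hours_per_fp     = {1998: 9.0,  1999: 8.6,  2000: 8.9}       # median hours per function point

base = 1998
# Productivity index: function points produced per hour, relative to the base year.
productivity = {yr: hours_per_fp[base] / h * 100.0 for yr, h in hours_per_fp.items()}
# The adjusted price index falls when productivity rises faster than input costs.
adjusted = {yr: input_cost_index[yr] / productivity[yr] * 100.0 for yr in input_cost_index}
for yr in sorted(adjusted):
    print(yr, round(productivity[yr], 1), round(adjusted[yr], 1))
```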
Photocopiers
A BEA quality-adjusted hedonic price index for photocopiers was introduced in the comprehensive revision beginning with 1992. Current-dollar estimates for private fixed investment in photocopiers are relatively flat from the 1992 to 1997 benchmark I-O estimates, and NIPA estimates have been falling steadily since 1997. We believe this trend is due, at least in part, to declining quality-adjusted prices, which were not reflected in the previously published price estimates. The improved price index has an average annual rate of decline of about 7 percent, whereas the previously published price index had an average annual increase of almost 2 percent. Photocopying equipment has improved significantly over time, much like computers and printers, and thus is a good candidate for quality adjustment of its prices by hedonic methods.
The improved price index is a regression price index that was constructed using the coefficients of year dummy variables. Biennial regressions were run using a log-log functional form, with price a function of copy speed, color capability, multifunctionality, recommended maximum copier volume (measured in copies per month), and year of model introduction. The sample consisted of over 1,400 observations spanning the years 1992–2001. Estimates for 2002 and 2003 were extrapolated using the PPI for "photographic equipment and supplies" with a 3.1 percent downward bias adjustment. The bias adjustment is equal to one-half the 6.3 percent per year difference between the PPI for "photographic equipment and supplies" and the BEA's hedonic index for photocopiers for 1992–2001. These extrapolated estimates will be replaced with hedonic price indexes in subsequent annual and benchmark revisions as data become available.
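A hedged sketch of the regression price index described above: ordinary least squares on log price against log characteristics and year dummies, with the exponentiated dummy coefficients tracing quality-adjusted price change. The characteristics, sample, and coefficients below are synthetic stand-ins, not BEA's data or estimates.

```python
# Sketch of a hedonic regression price index for photocopiers (synthetic data).
# log(price) = a + b1*log(speed) + b2*color + b3*log(volume) + year dummies + error
# The quality-adjusted index is exp(year-dummy coefficient), base year = first year.
import numpy as np

rng = np.random.default_rng(0)
years = np.repeat([1998, 1999, 2000, 2001], 60)
speed = rng.uniform(10, 90, size=years.size)       # copies per minute
color = rng.integers(0, 2, size=years.size)        # color capability dummy
volume = rng.uniform(5e3, 3e5, size=years.size)    # recommended monthly copy volume

true_decline = {1998: 0.0, 1999: -0.07, 2000: -0.14, 2001: -0.21}
log_price = (6.0 + 0.6 * np.log(speed) + 0.3 * color + 0.2 * np.log(volume)
             + np.array([true_decline[y] for y in years])
             + rng.normal(0, 0.10, size=years.size))

# Design matrix: constant, log characteristics, and dummies for 1999-2001.
dummies = np.column_stack([(years == y).astype(float) for y in (1999, 2000, 2001)])
X = np.column_stack([np.ones(years.size), np.log(speed), color, np.log(volume), dummies])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

index = {1998: 100.0}
for i, y in enumerate((1999, 2000, 2001)):
    index[y] = 100.0 * np.exp(coef[4 + i])
print({y: round(float(v), 1) for y, v in index.items()})
```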
References Aizcorbe, Ana, Carol Corrado, and Mark Doms. 2000. Constructing price and quantity indexes for high technology goods. Paper presented at the CRIWNBER Summer Institute 2000 Workshop on Price, Output, and Productivity Measurement. 31 July, Cambridge, Massachusetts. Available at http://www.nber .org/~confer/2000/si2000/doms.pdf. Aron, Debra, Ken Dunmore, and Frank Pampush. 1997. The impact of unbundled network elements and the Internet on telecommunications access infrastructure. Harvard Information Infrastructure Project. Manuscript, December. Available at http://www.ksg.harvard.edu/iip/iicompol/Papers/Pampush.html. Banks, Howard. 1997. The law of the photon. Forbes, October. Bureau of Economic Analysis (BEA), U.S. Department of Commerce. 1998. Methodology and appendixes, benchmark input-output accounts of the United States, 1992. Washington, DC: GPO. ———. 2001. National income and product accounts of the United States, 1929–97. Vol. 1. Washington, DC: GPO. Brynjolfsson, Eric, and Chris F. Kemerer. 1996. Network externalities in microcomputer software: An econometric analysis of the spreadsheet market. Management Science 42:1627–47. Cartwright, D. W. 1986. Improved deflation of purchases of computers. Survey of Current Business 66 (March): 7–9. Cole, R., Y. C. Chen, J. Barquin-Stolleman, E. Dulberger, N. Helvacian, and J. H. Hodge. 1986. Quality-adjusted price indexes for computer processors and selected peripheral equipment. Survey of Current Business 66 (January): 41–50. Commission of the European Communities/Eurostat, the International Monetary Fund, the Organization for Economic Cooperation and Development, the Statistics Division of the former Department for Economic and Social Information and Policy Analysis and the regional commission of the United Nations Secretariat, and the World Bank. 1993. System of national accounts 1993. New York: United Nations.
Doms, Mark E. 2002. Communications equipment: What has happened to prices? Paper presented at the National Bureau of Economic Research, Conference on Research in Income and Wealth, Measuring Capital in the New Economy. 26–27 April, Washington, DC. Doms, Mark E., and Christopher M. Forman. 2001. Prices for local area network equipment. Paper presented at the Brookings Program on Productivity in the Services Sectors: Workshop on Communications Output and Productivity. 23 February, Washington, DC. Dulberger, Ellen. 1993. Sources of price decline in computer processors: Selected electronic components. In Price measurements and their uses, ed. Murray Foss, Marilyn Manser, and Allan Young, 103–24. Chicago: University of Chicago Press. Gandal, Neil. 1994. Hedonic price indexes for spreadsheets and an empirical test for network externalities. Rand Journal of Economics 25 (Spring): 164–70. Gordon, Robert J. 1999. Has the “new economy” rendered the productivity slowdown obsolete? Northwestern University, Department of Economics. Manuscript, June. ———. 2000. Tables to accompany discussion of Stephen P. Oliner and Daniel E. Sichel, “The resurgence of growth in the late 1990s: Is information technology the story?” Conference on Structural Change and Monetary Policy. 3 March, Federal Reserve Board of San Francisco. Grimm, Bruce. 1998. Price indexes for selected semiconductors, 1974–96. Survey of Current Business 78 (February): 8–24. Holdway, M. 2001. Quality adjusting computer prices in the producer price index: An overview. Washington, DC: Bureau of Labor Statistics. Available at http:// www.bls.gov/ppicomqa.htm. Jorgenson, Dale W. 2001. Information technology and the U.S. economy. American Economic Review 91 (March): 1–32. Jorgenson, Dale W., and Kevin J. Stiroh. 2000. Raising the speed limit: U.S. economic growth in the information age. Brookings Papers on Economic Activity, Issue no. 1:125–211. Landefeld, J. Steven. 2002. BEA’s strategic plan for 2001–2005. Survey of Current Business 82 (May): 8–32. Landefeld, J. Steven, and Bruce Grimm. 2000. A note on the impact of hedonics and computers on real GDP. Survey of Current Business 80 (December): 17–22. Lawson, Ann M., Kurt S. Bersani, Mahnaz Fahim-Nader, and Jiemin Guo. 2002. Benchmark input-output accounts of the United States, 1997. Survey of Current Business 82 (December): 19–38. Liegey, Paul R., and Nicole Shepler. 1999. Adjusting VCR prices for quality change: A study using hedonic methods. Monthly Labor Review 122 (September): 22–37. Longstreet, David. 2001. Function point training and analysis manual. Blue Springs, MO: Longstreet Consulting Inc. Available at http://www.SoftwareMetrics.Com/ freemanual.htm (accessed August 2001). McKinsey Global Institute. 2001. US productivity growth 1995–2000: Understanding the contribution of information technology relative to other factors. Measurement appendix. Available at http://www.mckinsey.com/knowledge/mgi/ productivity/index.asp (accessed October 2001). Moulton, Brent. 2001. The expanding role of hedonic methods in the official statistics of the U.S. Paper presented to BEA’s Advisory Committee. 30 November, Washington, DC. Available at http://www.bea.doc.gov/bea/about/expand3.pdf. Moulton, Brent R., Timothy J. LaFleur, and Karin E. Moses. 1999. Research on improved quality adjustment in the CPI: The case of televisions. In Proceedings
of the fourth meeting of the International Working Group on Price Indexes, ed. W. Lane, 77–99. Washington, DC: Bureau of Labor Statistics. Available at http:// www.statcan.ca/secure/english/ottawagroup/pdf/18moul98.pdf. Moylan, Carol. 2001. Estimation of software in the U.S. national income and product accounts: New developments. OECD discussion paper. Paris: OECD, September. Available at http://webnet1.oecd.org/doc/M00017000/M00017821.doc. Moylan, Carol E., and Brooks B. Robinson. 2003. Preview of the 2003 comprehensive revision of the national income and product accounts. Survey of Current Business 83 (September): 17–32. Nordhaus, William D. 2001. Productivity growth and the new economy. NBER Working Paper no. 8096. Cambridge, MA: National Bureau of Economic Research. Oliner, Stephen D., and Daniel E. Sichel. 1994. Computers and output growth revisited: How big is the puzzle? Brookings Papers on Economic Activity, Issue no. 2:299–301. ———. 2000. The resurgence of growth in the late 1990s: Is information technology the story? Journal of Economic Perspectives 14 (4): 3–22. Parker, Robert P., and Bruce Grimm. 2000. Recognition of business and government expenditures for software as investment: Methodology and quantitative impacts, 1959–98. Paper presented to BEA’s Advisory Committee. 5 May, Washington, DC. Available at http://www.bea.doc.gov/bea/papers/software.pdf. Seskin, Eugene P., and Daniel Larkins. 2004. Improved estimates of the national income and product accounts for 1929–2002. Survey of Current Business 84 (February): 7–29. Triplett, J. E. 1986. Economic interpretation of hedonic models. Survey of Current Business 66 (January): 36–40. Wasshausen, David. 2002. Computer prices in the national income and product accounts. Draft methodology paper. Washington, DC: Bureau of Economic Analysis.
Comment
Barry Bosworth
We often hear the refrain "Our economic system is undergoing rapid change, and the statistical systems are failing to keep up and adapt to that change." But the fact that the phrase has become a cliché does not make it true. This paper provides an excellent example of the counter case. The statistical treatment of high-technology products represents a remarkably good example of a situation in which the U.S. statistical agencies moved quickly and aggressively to incorporate new information, not just into the national accounts but throughout the whole statistical system. And I think that's a big plus. The new technologies have been included within the statistical reporting system; but probably more important, new methodologies are being used to evaluate them.
Barry Bosworth is a senior fellow in the Economic Studies Program (the Robert V. Roosa Chair in International Economics) at the Brookings Institution.
This paper addresses a couple of features of the statistical system that are important to keep in mind in evaluating future changes. First, the construction of the national accounts is very much supply-side based in that it is anchored to a set of I-O accounts for the economic census years that trace out the flow of goods through the economic system. Most of the information comes from suppliers, very little from buyers. As the authors note, the supply-side nature of the basic data, for example, provides the basis for estimating the composition of the capital stock and the relative importance of high-tech capital. Because they do constitute the basic building blocks, it is important to try to improve the quality of those I-O tables. To a large extent, the annual national accounts are simply an interpolation and extrapolation of the I-O tables from each five-year economic census.
Second, the majority of the information on production and sales is provided in nominal terms, creating a need for a parallel system for collecting information on prices. Again, the U.S. statistical system has been very active in exploring alternative methodologies for constructing price indexes for high-tech products. In this regard the authors' argument that matched-model and hedonic approaches yield similar estimates of price change is reassuring: the construction of hedonic price indexes remains in some respects more of an art than a science, and it would be valuable to have two alternative methodologies as a check on the final indexes. However, some recent papers have suggested that the two approaches can yield significantly different results. Producers may not fully adjust prices on existing models to meet the competition of new models, preferring instead to let sales of the old model fade away. The largest gaps in information on high-tech products appear to be in the prices of communications equipment. While some recent studies have constructed price indexes for various types of communications equipment, the coverage is still very incomplete.
Third, I thought the paper clearly showed the value of introducing computer software into the accounts as a component of investment and final demand. But in reviewing the work required to accomplish that objective (the measurement of price change and depreciation), and after listening to the discussion in previous sessions of this conference, I am left with the conclusion that we are not ready to try to put intangibles into the national accounts. What we should be looking for at this stage is the development of additional data on intangibles in satellite-type accounts.
The next step in the improvement of the national accounts depends very much on the purposes for which they will be used. In an earlier session, someone emphasized the role of the national accounts in measuring business activity, stressing their importance as a guide for stabilization policy. The recent changes in the national accounts have been critical from that perspective. The year 2001 was something close to a depression in the high-technology sector, a recession for manufacturing, and a blip for the rest of the economy. But the slowdown would look much different in the absence
of the changes in the measurement of the output of the high-technology sector. But a second major use of the national accounts data is centered around issues affecting productivity and the sources of long-term growth. And, from that perspective, the greatest future gains are likely to come from some degree of disaggregation. We’ve done about as much as we can in examining the aggregate context for growth. We need some industry detail, and the BEA has been accommodative, with an expanded set of industry output accounts. We now have measures of output and purchased inputs, as well as value added. Those are all steps in the right direction. The major problem goes back to the beginning of this paper. We have a remarkable degree of detail about investment and the stock of capital by type. We have that detail because the accounts are supply based in collecting lots of information from producers. But we know almost nothing about who uses it. That is a major drawback if we want to allocate the capital stock by industry and perhaps examine its impact on productivity in the industries that are major users of the new technologies. The BEA produces a table showing the allocation of different types of capital by using industry, but it is based on very little information and it is badly out of date: the latest information applies to 1992. I believe that is one of the major gaps in our current efforts to evaluate the impact of technological changes on the economy. An improved capital-flow table should be a major future priority for the statistical system. Finally, I would like to take a moment to make some international comparisons that go to the issue of comparability between the data for the United States and other Organization for Economic Cooperation and Development (OECD) countries. This is relevant because the claim has been made that the higher recent rate of U.S. growth is simply the result of statistical reporting differences—larger adjustments for quality change in high-technology goods. The United States does make greater use of quality adjustments in the construction of the price indexes, raising the estimate of output growth. But it is also important to note that the United States uses chain-weighted price indexes, as opposed to the fixed-weight indexes that are still most common in other countries. To a first approximation chain-weighting and the quality adjustments offset each other. The net result is far less difference than is frequently claimed. Paul Schreyer and others at the OECD have made considerable progress in providing comparable measures of the role of high technology in the member countries. As shown in figure 10C.1, the United States does devote a larger share of investment to high technology products, but the share has increased in all of the Group of Seven (G7) countries. The data are all in nominal terms and do not involve any issue of price measurement. It is also useful to examine the contribution that this high-tech capital
Fig. 10C.1 Percentage share of ICT equipment and software in total nonresidential investment Source: Colecchia and Schreyer (2001).
Fig. 10C.2 Percentage point contribution of total ICT to output growth Source: Colecchia and Schreyer (2001).
has made to output growth in each country. Again, in figure 10C.2 there is an acceleration in all of the countries after 1995, but it is much larger for the United States. In part, this is the result of differences in the price measures; but it is also a reflection of the larger share of investment going to high-tech products in the United States and the fact that Europe is not a major producer of high-tech capital—increases in investment spending are absorbed by greater imports. Europe is a bigger player in the market for communications equipment, particularly wireless; and improvements in the measurement of the prices of communications capital might alter the comparison somewhat. In summary, information and communications technology (ICT) has been a major contributor to growth in the United States, but it has also added to output in other countries.
Reference
Colecchia, Alessandra, and Paul Schreyer. 2001. ICT investment and economic growth in the 1990s: Is the United States a unique case? A comparative study of nine OECD countries. STI Working Paper no. 2001/7. Paris: Organization for Economic Cooperation and Development, Directorate for Science, Technology, and Industry, October.
11 Growth of U.S. Industries and Investments in Information Technology and Higher Education
Dale W. Jorgenson, Mun S. Ho, and Kevin J. Stiroh
11.1 Introduction
Our objective in this paper is to quantify the sources of U.S. economic growth for 1977–2000 using data for individual industries. This "bottom-up" approach complements the "top-down" approach employed in our earlier work.1 Industry-level data enable us to trace the sources of U.S. economic growth to their industry origins, to isolate and analyze the industries that produce information technology (IT), and to assess the relative importance of total factor productivity growth and factor accumulation at both industry and economywide levels.2 Productivity growth in IT-producing industries has steadily risen in importance, generating a relentless decline in the prices of IT equipment and software. This decline in IT prices is rooted in developments in technology
Dale W. Jorgenson is the Samuel W. Morris University Professor in the Department of Economics at Harvard University. Mun S. Ho is a visiting scholar at Resources for the Future. Kevin J. Stiroh is assistant vice president in the Banking Studies Function at the Federal Reserve Bank of New York. We are grateful to Jon Samuels for excellent research assistance and Conference on Research in Income and Wealth participants, especially our discussant Michael Harper, the editors, and two anonymous referees for helpful comments and useful suggestions for the revisions of this paper. Many have generously provided data and advice—Jay Berman, Charles Bowman, Carl Chentrens, James Franklin, Tom Hale, Tom Nardone and Larry Rosenblum, Kurt Kunze, Eldon Ball, Mike Dove, and Scott Segerman. The views expressed are those of the authors and do not necessarily reflect those of the Federal Reserve System or the Federal Reserve Bank of New York.
1. See Jorgenson and Stiroh (2000), Jorgenson (2001), and Jorgenson, Ho, and Stiroh (2002), as well as Baily (2002), Congressional Budget Office (2002), Council of Economic Advisors (2002), Gordon (2002), McKinsey Global Institute (2001), Oliner and Sichel (2000, 2002), and Whelan (2002).
2. Baily and Lawrence (2001) and Stiroh (2002) provide industry comparisons that examine the role of IT, and Brynjolfsson and Hitt (2000) survey firm-level results.
that are widely understood by technologists and economists, particularly the continuous improvement in the performance-price ratio of semiconductors captured by Moore's Law (Jorgenson 2001). Information technology has reduced the cost and improved the performance of products and services embraced by U.S. businesses, households, and governments. The enhanced role of investment in IT is a conspicuous feature of the U.S. economy, and a growth revival is under way in many important IT-using industries.
The key IT-producing industries are computers, communications equipment, semiconductors, and software. These are below the usual two-digit Standard Industrial Classification (SIC) level used in other studies, and we have created detailed data for them in order to characterize IT production as precisely as possible. Industrial Machinery and Equipment (SIC 35), for example, includes Computers and Office Equipment (SIC 357), while Electronic and Electrical Equipment (SIC 36) contains Communications Equipment (SIC 366) and Electronic Components (SIC 367), mainly semiconductors. Business Services (SIC 73) includes Computer Services (SIC 737), primarily software. This breakdown enables us to quantify the impact of IT production on the U.S. economy more accurately and represents a substantial advantage over earlier studies using the broader industry aggregates.
The mechanisms for diffusion of advances in IT are twofold. First, advances in semiconductors generate continuing price reductions for a given level of performance. These price reductions drive demands for intermediate inputs in semiconductor-using industries such as computers, communications equipment, and a host of others. Second, the industries that use semiconductors as inputs generate further price declines that drive investments in IT equipment like computers and telecommunications equipment. Advances in equipment production augment the downward pressure on prices, steadily redirecting the rising IT investment flow toward its most productive uses.3
On the labor side, college-educated workers are often identified as "knowledge workers" who make use of IT equipment and software, so we have divided labor input between college and noncollege workers. Obviously, not every knowledge worker is a college graduate, nor is every college graduate a knowledge worker. A second issue is that while investments in IT take place within the industries we have identified, investments in higher education are undertaken by future college-educated workers and precede employment of these workers, sometimes by a considerable period of time. We measure the flow of human capital services from college-educated workers, but not the accumulation of the stock of this capital that takes place in colleges and universities.
3. Models of the interactions among semiconductor, computer, and other industries are presented by Dulberger (1993), Triplett (1996), and Oliner and Sichel (2000).
Our empirical results show that the appropriate framework for analyzing the impact of IT is a production possibility frontier that includes outputs of IT-producing industries, as well as inputs of IT capital services, rather than the usual aggregate production function. Two advantages of this framework are that we incorporate detailed data from individual industries, including the IT-producing industries, and that the prices of IT outputs and inputs are linked through the prices of IT capital services. Finally, we report detailed results for industry production accounts and demonstrate the critical importance of investments in IT capital and higher education across U.S. industries.
Table 11.1 summarizes the results of a growth-accounting decomposition of aggregate value added in the U.S. economy using a production possibility frontier that is generated directly from industry-level data. This decomposition is based on annual observations for the period 1977–2000, as well as the subperiods 1977–90, 1990–95, and 1995–2000. While these time periods are conventional, a word about the choice of the periods is useful. The first year in our data set is 1977, and the slow growth associated with the energy crisis of the 1970s is readily apparent in the data. The end of the last recession in our sample is 1991, and 1995 marks the surge in economic growth and jump in the rate of decline of IT prices.
Results show that the recent resurgence of the U.S. economy raised the growth rate of value added by 1.85 percentage points when 1995–2000 is compared to 1990–95. Capital input contributes 1.02 percentage points to the post-1995 revival, with a little more than half of this due to the surge in IT investment. Labor input contributes 0.44 percentage points to the growth resurgence. College-educated workers contribute about a third of a percentage point, while noncollege workers add another tenth of a percentage point to the growth resurgence. Faster growth in total factor productivity contributes the remaining 0.40 percentage points. These bottom-up results are quite comparable to the top-down estimates mentioned earlier.
The worth of underlying industry detail, however, is also readily apparent in table 11.1, which documents the rising contributions of IT-producing industries to U.S. economic growth. These industries have been steadily growing in importance, and their contribution to growth jumps by 75 percent after 1995. The contribution of IT inputs into production more than doubles after 1995 as the pace of IT price declines accelerated. In response to these price declines, firms, households, and governments accumulate IT equipment and software much more rapidly than other forms of capital. Non-IT capital, however, still predominates in the contribution of capital input and remains the most important source of U.S. economic growth throughout the period 1977–2000.
The contribution of labor input is next in importance as a source of U.S. economic growth and plays a vital role in the resurgence of the American economy after 1995.
Table 11.1
Growth in aggregate value added and its sources

                                     1977–2000   1977–90   1990–95   1995–2000   1995–2000 less 1990–95

Contributions
Value-added                             3.08       2.93      2.35       4.20             1.85
  IT                                    0.43       0.33      0.40       0.70             0.31
  Non-IT                                2.65       2.60      1.95       3.50             1.55
Capital input                           1.74       1.73      1.25       2.27             1.02
  IT                                    0.65       0.52      0.55       1.11             0.57
  Non-IT                                1.09       1.22      0.71       1.15             0.45
Labor input                             1.19       1.27      0.86       1.30             0.44
  College                               0.72       0.84      0.38       0.72             0.34
  Non-college                           0.48       0.43      0.49       0.59             0.10
Aggregate TFP                           0.14      –0.08      0.23       0.63             0.40

Addendum
Contribution of capital quality         0.78       0.75      0.53       1.13             0.61
Contribution of capital stock           0.96       0.98      0.73       1.13             0.41
Contribution of labor quality           0.22       0.24      0.24       0.17            –0.06
Contribution of labor hours             0.97       1.03      0.63       1.13             0.51

Growth rates
Value-added                             3.08       2.93      2.35       4.20             1.85
  IT                                   17.97      17.01     16.08      22.37             6.29
  Non-IT                                2.71       2.64      1.99       3.60             1.61
Capital input                           4.12       4.11      2.99       5.25             2.26
  IT                                   16.22      16.89     11.43      19.28             7.86
  Non-IT                                2.85       3.12      1.91       3.09             1.18
Labor input                             2.07       2.21      1.47       2.30             0.82
  College                               3.82       4.83      1.73       3.26             1.53
  Non-college                           1.30       1.13      1.33       1.70             0.37
Notes: All figures are average annual percentages. The contribution of an output or input is the growth rate multiplied by the average value share. IT value added includes production in SIC #357, #366, #367, and #737. IT capital input includes computer hardware, computer software, and telecommunications equipment.
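As the notes to table 11.1 state, each contribution is an input's growth rate multiplied by its average value share. The sketch below applies that arithmetic to the 1977–2000 growth rates from the table; the value shares are illustrative stand-ins chosen to roughly reproduce the published contributions, since the shares themselves are not reported here.

```python
# Sketch: share-weighted contributions to value-added growth (table 11.1 notes).
# Growth rates (percent per year, 1977-2000) are taken from the table; the average
# value shares below are hypothetical stand-ins, not the authors' estimates.
growth = {"capital input": 4.12, "labor input": 2.07}
avg_share = {"capital input": 0.42, "labor input": 0.58}   # hypothetical two-period averages

contrib = {k: growth[k] * avg_share[k] for k in growth}
tfp_contribution = 3.08 - sum(contrib.values())            # value-added growth less input contributions
print({k: round(v, 2) for k, v in contrib.items()}, "TFP:", round(tfp_contribution, 2))
# Compare: table 11.1 reports contributions of 1.74 (capital), 1.19 (labor), and 0.14 (TFP).
```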
The contribution of college-educated workers dominates the growth of labor input during the period 1977–2000, even though these workers are less numerous than noncollege workers. This reflects the facts that college-educated workers have higher marginal products on average, as can be seen in the college wage premium, and that the number of college-educated workers has been growing more rapidly than that of noncollege workers. Both college and noncollege workers, however, contribute to the growth revival after 1995. As the unemployment rate fell in the late 1990s, workers with a wide variety of levels of education and experience entered the ranks of the employed labor force. Finally, growth in total factor productivity (TFP) is the smallest of the three proximate sources of U.S. economic growth—capital input, labor input, and TFP. In fact, TFP growth essentially disappears during 1977–90.
During the past decade, TFP growth revives modestly during the subperiod 1990–95 and accelerates considerably after 1995.
As an addendum to table 11.1, we have divided the contribution of capital input between the contributions of capital stock and capital quality. The growth of capital quality, defined as the ratio of capital input to capital stock, reflects the rise in the relative importance of IT equipment and other forms of capital with higher marginal products and higher capital service prices. Capital stock fails to capture these differences in marginal products and has been superseded by a measure of capital input, as recommended in recent Organization for Economic Cooperation and Development (OECD) manuals on measuring capital (Blades 2001) and productivity (Schreyer 2001). Similarly, we have divided the contribution of labor input between the contributions of hours worked and labor quality. The growth of labor quality, defined as the ratio of labor input to hours worked, captures the rise in the relative importance of college-educated workers and other workers with higher marginal products and higher compensation per hour. Hours worked fails to capture these differences in marginal products and has been superseded by a measure of labor input, as recommended in the United Nations' (1993) System of National Accounts 1993 and Schreyer's (2001) OECD Productivity Manual.
Growth in value added is the sum of growth in hours and growth in average labor productivity (ALP). Table 11.2 presents this decomposition for the period 1977–2000 and shows that hours worked predominate, rising at 1.68 percent per year compared to ALP growth of 1.39 percent per year. Growth of ALP depends on capital deepening, labor quality growth, and the growth of TFP; we have divided capital deepening between IT and non-IT capital inputs, and labor quality among college-educated workers, noncollege workers, and the reallocation of hours between these two categories of workers. The results reveal that the slow growth of labor productivity in the 1970s and 1980s is due to a slump in capital deepening. The growth rate of ALP rises only slightly during the first half of the 1990s as capital deepening continues to slump, labor quality growth remains unchanged, and TFP growth experiences a modest revival. A substantial slowdown in the growth of hours worked generates a further slide in output growth in the early period.
Accelerating growth during 1995–2000 reflects a surge in the growth of hours worked of 0.93 percentage points and a similar rise in ALP growth. Table 11.2 shows that this reflects an increase in capital deepening of 0.60 percentage points, two-thirds of the increase in ALP growth, with most of this, 0.51 percentage points, due to a sharp jump in IT-capital deepening after 1995. The 0.40 percentage point increase in TFP growth is partly offset by a drop in the growth rate of labor quality of 0.06 points.
Table 11.2
Decomposition of aggregate labor productivity and TFP

                                     1977–2000   1977–90   1990–95   1995–2000   1995–2000 less 1990–95

Contributions
Average labor productivity              1.39       1.13      1.28       2.21             0.93
Capital deepening                       1.03       0.97      0.81       1.40             0.60
  IT                                    0.58       0.46      0.49       1.00             0.51
  Non-IT                                0.45       0.51      0.32       0.41             0.09
Labor quality                           0.22       0.24      0.24       0.17            –0.06
  College                               0.02       0.03     –0.01       0.01             0.02
  Non-college                           0.08       0.04      0.18       0.06            –0.13
  Reallocation of hours                 0.13       0.16      0.06       0.10             0.04
Aggregate TFP                           0.14      –0.08      0.23       0.63             0.40

Productivity and reallocations
Aggregate TFP                           0.14      –0.08      0.23       0.63             0.40
Reallocation of capital input           0.02       0.03     –0.02       0.03             0.05
  IT                                   –0.01       0.00     –0.03      –0.02             0.01
  Non-IT                                0.03       0.03      0.01       0.05             0.04
Reallocation of labor input            –0.04      –0.02     –0.09      –0.03             0.06
  College                              –0.01      –0.01     –0.01      –0.01             0.00
  Non-college                          –0.03      –0.01     –0.08      –0.03             0.05
Domar-weighted productivity             0.16      –0.09      0.35       0.63             0.28
  IT                                    0.27       0.21      0.30       0.41             0.11
  Non-IT                               –0.11      –0.30      0.05       0.22             0.17

Addendum
Aggregate production function value-added   3.11   2.75      2.12       5.04             2.92
Reallocation of value-added             0.04      –0.17     –0.22       0.84             1.07
Hours growth                            1.68       1.80      1.07       1.99             0.93
Notes: All figures are average annual percentages. The contribution of an output or input is the growth rate multiplied by the average value share. IT value added includes production in SIC #357, #366, #367, and #737. IT in capital input includes computer hardware, computer software, and telecommunications equipment. Aggregate TFP is from the production possibility frontier and Domar-weighted productivity is from aggregation across sectors (defined in text).
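The decomposition in table 11.2 is additive: ALP growth is the sum of the contributions of capital deepening, labor quality, and aggregate TFP, and value-added growth is the sum of hours growth and ALP growth. A short check against the published 1995–2000 figures (small discrepancies reflect rounding):

```python
# Sketch: additive ALP decomposition, checked against table 11.2 (1995-2000 column).
capital_deepening = 1.40
labor_quality     = 0.17
aggregate_tfp     = 0.63
alp_growth = capital_deepening + labor_quality + aggregate_tfp   # 2.20; the table reports 2.21

hours_growth = 1.99
value_added_growth = hours_growth + alp_growth                   # about 4.20, as in table 11.1
print(round(alp_growth, 2), round(value_added_growth, 2))
```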
This can be traced to a decline in quality growth for noncollege workers, associated with the surge in employment after 1995.
Table 11.2 also divides growth in TFP into components associated with IT-producing and non-IT-producing industries. As expected, this reveals a gradual rise in the contribution of IT-producing industries that has continued throughout our sample. Growth of TFP in non-IT industries is small and negative on average, which reflects the offsetting effects of industries with positive and negative TFP growth over this period. Since 1995, however, non-IT TFP growth has accelerated, which may reflect investments in intangible assets like unmeasured human capital, workplace practices, new business models, structural capital, and organizational capital.4
Two additional components of aggregate TFP growth that capture differences in marginal products of inputs in different industries are reallocations of capital and labor inputs among industries. For the period as a whole these reallocations are relatively small, but reallocations of labor input fluctuate considerably. Finally, as an addendum in table 11.2, we have compared growth in economywide value added estimated from two industry-level aggregation approaches—the production possibility frontier and the aggregate production function. The difference between these two measures is the reallocation of value added, which, as we will discuss in more detail, captures differences in the price of a unit of value added. If the production possibility frontier reduces to an aggregate production function, this price must be the same for all industries. While the differences were modest in size for the period 1948–79 covered by Jorgenson, Gollop, and Fraumeni (1987), the results in table 11.2 reveal that these differences are no longer small. For the period 1995–2000, the reallocation of value added reaches nearly a full percentage point. The wide gap between the production possibility frontier and the aggregate production function can largely be attributed to the more detailed classification of IT-producing industries employed in our study. Differences in the growth of value-added prices between these industries and non-IT industries create a fixed-weight bias.5 This divergence shows the critical importance of accounting for relative price changes through an index number approach like that currently utilized in the U.S. national income and product accounts (NIPAs). We conclude that the production possibility frontier provides a more appropriate measure of aggregate output when using industry data and has superseded the aggregate production function.
The remainder of the paper is organized as follows. Section 11.2 presents our methodology for measuring output and intermediate inputs. The most important feature is a consistent time series of interindustry transactions tables that allows us to allocate the sources of U.S. economic growth among industries. In section 11.3 we outline our methods for measuring capital input. Constant quality price indexes for information technology equipment are essential for separating the change in performance of this equipment from the change in price for a given level of performance. The cost of capital is the key concept for capturing the economic impact of information technology prices. Section 11.4 outlines our methods for measuring labor input. These incorporate differences in hours worked and wages for workers who differ in age, sex, and, most important, educational attainment.
4. See Abowd et al. (chap. 5 in this volume), Lev and Radhakrishman (chap. 3 in this volume), and Black and Lynch (chap. 6 in this volume) for overviews of these topics.
5. The aggregate production function uses fixed weights because the value-added prices are assumed to be the same.
In section 11.5 we outline a framework for aggregating output, capital, labor, and intermediate inputs, and productivity over industries. This framework was introduced by Jorgenson, Gollop, and Fraumeni (1987). A key role is played by a weighting scheme proposed by Domar (1961), based on the relative importance of each industry in value added, as well as the relative importance of value added in the industry's output. The Domar weighting scheme captures the impact of sources of growth at the industry level, both in the industry where growth occurs and in the industries that purchase the output of this industry. In section 11.6 we present an analysis of the sources of U.S. economic growth at the industry level. The contributions of capital and labor inputs and gains in economywide TFP presented in tables 11.1 and 11.2 reflect the evolution of the production structure of all industries in the U.S. economy. We aggregate over industries in order to show how changes in the production structure at the industry level cumulate to determine economywide economic growth. We focus special attention on the role of TFP growth in the IT-producing industries, investments in IT equipment and software, and services of the human capital of knowledge workers. Section 11.7 concludes the paper.
11.2 Measuring Output, Intermediate Input, and Value Added
This section describes our methodology for measuring industry outputs, intermediate inputs, and value added. This methodology uses a time series of input-output (I-O) tables and was introduced by Jorgenson, Gollop, and Fraumeni (1987, chap. 5). The results were extended and updated by Jorgenson (1995b). Here we describe the further revisions of these studies.
11.2.1 Notation
Y_j          quantity of output of industry j
P_{Y,j}      price of output to producer in industry j
P_{YT,j}     price of output to purchasers from industry j
X_{i,j}      quantity of input i into industry j
P^X_i        price of commodity i to buyers
X_j          index of total intermediate input into industry j
P_{X,j}      price of total intermediate input into industry j
P_{K,j}      price of total capital input to industry j
P_{L,j}      price of total labor input to industry j
v            value shares
YC_i         quantity of domestically produced commodity i
P_{YC,i}     price of domestically produced commodity i
m_i          quantity of imports of commodity i
P_{m,i}      price of imported commodity i
YS_i         quantity of total supply of commodity i
M_{j,i}      Make matrix; value of commodity i made by industry j
11.2.2 Methodology
We assume that the production function for industry j has M distinct intermediate inputs and that the function is separable in these inputs, so that
(1)   Y_j = f(K_j, L_j, X_j, T);   X_j = X(X_{1,j}, X_{2,j}, \ldots, X_{M,j}),
where there are M = 44 commodities corresponding to the primary products of the industries listed in table 11.3.6 Under the assumptions of constant returns to scale and competitive markets, the value of output is equal to the value of all inputs (capital, labor, and intermediate):
(2)   P_{Y,j} Y_j = P_{K,j} K_j + P_{L,j} L_j + \sum_i P^X_i X_{i,j},
where we assume that the price of intermediate input P^X_i of commodity i is the same for all purchasing sectors. We define the quantity of intermediate input as a Tornqvist index of its components:
(3)   \Delta \ln X_j = \sum_i \bar{v}_{i,j} \, \Delta \ln X_{i,j},
where the \bar{v}_{i,j} weights are the average, two-period shares of the components in the value of intermediate input. The price index of intermediate input P_{X,j} is equal to the value of intermediate input divided by the quantity index X_j. Note that this price is specific to industry j, even if the prices of the component inputs are the same for all industries, because the shares of the components differ among industries.7 We require the concept of industry value added for aggregation over sectors in section 11.6. Assuming that the production function is separable in intermediate input and value added, we define industry value added V_j implicitly from the equation
(4)   \Delta \ln Y_j = (1 - \bar{v}_{V,j}) \, \Delta \ln X_j + \bar{v}_{V,j} \, \Delta \ln V_j,
6. We distinguish forty-four separate inputs. Each of these is an aggregate of many subcommodities, classified at a finer level of detail in the I-O accounts. Each industry purchases a basket of these intermediate inputs. If we were to construct the price of the basket from prices of the subcommodities, we would obtain a different price index for each purchasing industry. Our assumption is that all industries face the same price of intermediate input i. 7. The same subscript i is used in the numerator and denominator of equation (4) to avoid the proliferation of symbols; the different references should be obvious. This applies to similar expressions throughout the paper.
where the v_{V,j} weights are the two-period average shares of value added in gross output. Value added in nominal terms is P_{V,j} V_j = P_{K,j} K_j + P_{L,j} L_j. The price of value added, P_{V,j}, is derived by dividing this nominal value by the quantity index from equation (4).

In order to identify the impact of information technology, we isolate the industries that produce IT. In particular, we divide Industrial Machinery and Equipment (SIC 35) into Computers (SIC 357) and Machinery, excluding Computers. Similarly, we divide Electronic and Electrical Equipment (SIC 36) into Electronic Components (SIC 367, which includes semiconductors, SIC 3674), Communications Equipment (SIC 366), and Other Electrical Machinery. Finally, we divide Business Services (SIC 73) into Computer Services (SIC 737, which includes software) and Other Business Services. This breakdown allows us to better identify and analyze the impact of IT production on the U.S. economy.

We derive both outputs and intermediate inputs from a time series of interindustry transactions tables. These tables consist of a Use table that allocates the use of each commodity among intermediate inputs and final demand categories and a Make table that allocates the output of each commodity among the industries that produce it. The output of a given commodity by all industries and the input of this commodity by all industries must be equal. In the Use table, the jth column represents industry j, and the ith row represents commodity i. In nominal terms, the sum of the elements in column j is the value of the industry's output. This is equal to the value of this output to the producer, plus taxes paid on this output by the purchaser T_j: (5)
P_{YT,j} Y_j = P_{Y,j} Y_j + T_j,
where the price received by the seller is PY, j and the price paid by the purchaser is PYT, j . An industry may produce several commodities, and a commodity may be produced by several industries.8 The value of the output of industry j is equal to the value of all the commodities it produces: (6)
P_{YT,j} Y_j = \sum_i M_{j,i},
where Mj,i is the value of commodity i produced by j. This implies that (7)
VC_i = P_{YC,i} YC_i = \sum_j M_{j,i},
where YCi denotes the quantity of domestically produced commodity i and VCi the value. 8. For example, the hotel industry produces the services of both “hotels” and “restaurants.”
We assume that each commodity is an aggregate of the quantities produced in various industries and the price of the ith commodity P_{YC,i} is given by (8)
\Delta \ln P_{YC,i} = \sum_j \frac{M_{j,i}}{VC_i} \Delta \ln P_{YT,j}.
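The Make-table relationships in equations (6) through (8) reduce to row sums, column sums, and share-weighted averages. The sketch below illustrates them in Python for a hypothetical two-industry, two-commodity Make matrix; the array layout and all numbers are ours, not the chapter's.

```python
# A minimal sketch of equations (6)-(8) for a hypothetical 2-industry,
# 2-commodity Make matrix; all numbers are illustrative.
import numpy as np

# Make matrix M[j, i]: value of commodity i made by industry j.
M = np.array([[90.0, 10.0],     # industry 1 mainly makes commodity 1
              [5.0, 120.0]])    # industry 2 mainly makes commodity 2

# Equation (6): industry output value is the row sum over commodities.
industry_output_value = M.sum(axis=1)       # P_{YT,j} Y_j

# Equation (7): domestic commodity value is the column sum over industries.
VC = M.sum(axis=0)                          # VC_i = P_{YC,i} YC_i

# Equation (8): commodity price growth is a Make-share-weighted average of
# industry output price growth (log changes); hypothetical industry prices.
dln_P_YT = np.array([0.02, -0.05])
weights = M / VC                            # M[j, i] / VC_i; each column sums to one
dln_P_YC = weights.T @ dln_P_YT             # one entry per commodity

print(industry_output_value, VC, dln_P_YC)
```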
The Use table also includes sales to final demand. This is broken down into the familiar categories of personal consumption expenditures, gross private domestic investment, government purchases, exports, and imports (c, i, g, x, m). The sum of the elements in row i of the Use table is the value of all deliveries of the ith commodity to intermediate and final demand. Thus, the supply-demand balance for commodity i in value terms is (9)
P_{YC,i} YC_i = \sum_j P_{X_i} X_{i,j} + P_{X_i} (c_i + i_i + g_i + x_i) - P_{m,i} m_i.
We can rewrite this as the total supply from domestic suppliers and imports, which equals total demand: (10)
P_{YC,i} YC_i + P_{m,i} m_i = \sum_j P_{X_i} X_{i,j} + P_{X_i} (c_i + i_i + g_i + x_i).
We assume that all buyers purchase the same basket of commodity i— that is, the same share of the imported variety. The quantity of the total supply of commodity i, YSi , is assumed to be a translog index of the two varieties, and the price is defined to make the value identity hold: (11)
\Delta \ln YS_i = v_C \Delta \ln YC_i + v_m \Delta \ln m_i
P_{X_i} YS_i = P_{YC,i} YC_i + P_{m,i} m_i.
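Equation (11), like equation (3) earlier and equations (16), (18), and (21) below, is a Tornqvist (translog) index: the log growth of the aggregate is a sum of component log growth rates weighted by two-period average value shares, and the aggregate price is backed out from the value identity. A minimal sketch with hypothetical prices and quantities for one commodity and its imported variety:

```python
# A minimal sketch of the Tornqvist aggregation in equation (11) (and, in the
# same form, equation (3)); all prices and quantities are hypothetical.
import numpy as np

def tornqvist_growth(p0, q0, p1, q1):
    """Log growth of the aggregate: component log growth rates weighted by
    two-period average value shares."""
    v0 = p0 * q0 / np.sum(p0 * q0)      # value shares, period t-1
    v1 = p1 * q1 / np.sum(p1 * q1)      # value shares, period t
    vbar = 0.5 * (v0 + v1)              # two-period average shares
    return np.sum(vbar * np.log(q1 / q0))

# Components: [domestic commodity YC_i, imports m_i] in two adjacent years.
p0, q0 = np.array([1.00, 0.90]), np.array([100.0, 20.0])
p1, q1 = np.array([1.02, 0.85]), np.array([103.0, 25.0])

dln_YS = tornqvist_growth(p0, q0, p1, q1)   # growth of total supply YS_i
YS0 = q0.sum()                              # base-period level (a normalization)
YS1 = YS0 * np.exp(dln_YS)
P_X1 = np.sum(p1 * q1) / YS1                # implicit price from the value identity
print(dln_YS, YS1, P_X1)
```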
Note that this price P Xi is the price paid by producers for their input in equation (2). This completes our interindustry accounting system. 11.2.3 Data We next describe the sources and methods for construction of our system of interindustry accounts. The starting point is the official benchmark U.S. Inter-Industry Transactions tables produced by the Bureau of Economic Analysis (BEA). These are available for the years 1977, 1982, 1987, and 1992.9 The Office of Employment Projections of the Bureau of Labor Statistics (BLS) uses these benchmark tables to generate a time series of interindustry transactions tables for 1983–2000. An earlier version covered the period 1977–95.10 These data are combined with the time series of industry outputs, also from the Office of Employment Projections, giving us 9. The latest tables are described in Survey of Current Business, November 1997, p. 36. 10. We are grateful to Charles Bowman, Carl Chentrens, and James Franklin of the BLS for providing these data and patiently explaining the details. The data are available on the Office of Employment Projections website at http://www.bls.gov/emp/.
both values and prices. The major difference between the BEA 1992 benchmark table and the BLS I-O tables is the treatment of software; the BLS includes an explicit estimate of software for each sector. We consolidate and reorganize the BLS interindustry transactions tables from 192 industries to 44 industries.11 The major differences are the following: (a) we treat owner-occupied housing as a direct purchase of capital input by households, rather than a purchase of housing services from the real estate industry; (b) we consolidate privately owned electric utilities with publicly owned utilities in SIC 32, leaving a government enterprises sector that does not include electric utilities; (c) the government industry is simplified by consolidating government labor and capital services into the final demand column for government purchases; (d) we impute the service flow from consumers’ durable assets to obtain the consumption of these capital services and treat these as a direct purchase of capital services by households; (e) we treat nonprofit producers symmetrically with the other producers—our accounts are thus broader in scope than those limited to for-profit business sectors; (f) we have adjusted the tables for 1998–2000 to match the GDP given in the August 2001 Survey of Current Business; the BLS series are based on a previous revision of the NIPAs. For the period 1977–82, prior to the beginning of the current BLS time series of interindustry transactions tables, we use a previous version of these BLS tables for 1977–95, based on the 1987 benchmark tables. We consolidate the earlier tables in a similar manner and link them to the current tables for 1983–2000. To make them comparable, we adjust the 1977– 82 tables to the new values of industry output produced by the BLS Office of Employment Projections, using the method of iterative proportional fitting discussed by Jorgenson, Gollop, and Fraumeni (1987, pp. 72–75). Sectors such as construction, oil mining, and electric utilities are quite different between these two versions, and the previous series are adjusted to match the current one. Our final set of matrices for 1977–2000 is thus consistent with the latest estimates of industry output from the BLS and the annual update of GDP published by the BEA in 2001 (Survey of Current Business, August 2001). The total of all entries in each column of the Use table is consistent with the BLS time series for Industry Output and Employment.12 This data set provides the value of output, the price of output, and employment for the same industries for the period 1977–2000. The exceptions to this are Communications Equipment (BLS industry 53) and Computer and Data Processing Services (BLS industry 147), where we replace the BLS prices with 11. The aggregation process involves reallocating special sectors like scrap, rest of the world, inventory valuation adjustment, general government, and so on. These are reallocated to the forty-four sectors and final demand in accord with both Use and Make matrices. 12. Data were kindly provided by Jay Berman of the BLS. The data are available at http:// www.bls.gov/emp/empind2.htm.
BEA prices that are adjusted for quality change.13 The prices received by producers are obtained by subtracting output taxes from the purchasers’ prices. The BLS I-O data set also includes the final demand categories—c, i, g, x, m—in both current and constant prices. Using information on imports by commodities, we derive the price for imports Pm,i . We calculate a price index for the total supply of commodity i using equation (11). Finally, we derive the prices and quantities of intermediate inputs. The value of input i into industry j is given by the Use matrix. The price P Xi has been calculated, as described above, so that we can derive the quantities of interindustry transactions Xi, j . From equation (3), we calculate the quantity of intermediate input for each industry. A major drawback of the BLS time series is that value added is not divided among capital and labor compensation and indirect business taxes, as in the BEA benchmark tables. To remedy this deficiency we employ GDP by industry data produced by the BEA.14 These data give, for each industry, the components of gross product—compensation of employees, proprietors’ income, corporate profits, indirect business taxes, etc. The value of labor input described in section 11.4 below is the sum of compensation of employees and our estimated value of self-employed labor compensation. The estimation of the value of capital input is described in section 11.3 below and includes property compensation, less the imputed value of self-employed labor compensation, plus certain property taxes. The remainder of GDP after subtracting the value of capital and labor compensation is equal to sales taxes, net of subsidies, Tj . 11.2.4 Issues Breakdown of Value Added A major issue is that the data on GDP by industry produced by BEA are not consistent with the data on value added in the benchmark interindustry transactions tables. We maintain the total of value added for each industry estimated in the BLS tables and allocate it in the same proportions as those in GDP by industry. This gives us values of capital income, labor income, and taxes—PK, j Kj , PL, j Lj , and Tj , respectively—that sum to the BLS values. The quantities of capital and labor input are those derived from the GDP by industry data or the census data described in sections 11.3 and 11.4 below. The prices of capital input and labor input are derived by dividing these estimated values by the quantities derived from GDP by industry.
13. Data are available from the “Gross Output by Detailed Industry” file on the BEA’s website, http://www.bea.gov/bea/dn2/gpo.htm, and described in Lum and Moyer (2000). 14. These data are described in Lum and Moyer (2000). The data are available at http:// www.bea.gov/bea/dn2/gpo.htm.
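The allocation described under "Breakdown of Value Added" is a proportional split of the BLS value-added total using GDP-by-industry shares. A minimal sketch for one hypothetical industry follows; all figures are invented rather than drawn from the BLS or BEA data.

```python
# A minimal sketch of the value-added split described under "Breakdown of
# Value Added"; all figures are hypothetical.
bls_value_added = 500.0                      # BLS total for one industry

# Components of gross product for the same industry in the GDP-by-industry data.
bea = {"capital income": 180.0, "labor income": 260.0, "taxes": 40.0}
bea_total = sum(bea.values())

# Keep the BLS total, allocate it in the BEA proportions.
allocated = {name: bls_value_added * value / bea_total for name, value in bea.items()}

# Implicit prices follow by dividing the allocated values by the corresponding
# quantity indexes (the capital and labor quantities of sections 11.3 and 11.4).
quantities = {"capital income": 90.0, "labor income": 130.0}   # hypothetical indexes
prices = {name: allocated[name] / q for name, q in quantities.items()}
print(allocated, prices)
```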
Frequency of Interindustry Transactions Tables A second difficulty in constructing a time series of interindustry transactions tables is that the time series for GDP by industry, published annually, are not consistent with the benchmark interindustry transactions tables published every five years. The magnitude and sources of the discrepancies are discussed in BEA (1997). Consistency over Time The 1977 and 1982 benchmark tables are based on the 1972 SIC, while 1987 and 1992 tables are based on the 1987 SIC. Similarly, the annual GDP by industry data have a break in 1987. Data for GDP by industry are provided for both classifications in 1987, and we have adjusted the old classification to the new one, using the shares of each of the old industries in the new industries for 1987. For the interindustry transactions tables we apply the method of iterative proportional fitting to the matrices based on the 1972 SIC, using industry outputs based on the 1987 SIC. Our estimates of industry output and commodity prices are based on the BLS industry output and import prices. We assume that all buyers pay the same price for each commodity, reflecting the absence of data on prices for different purchasers. 11.2.5 Results A list of the industries included in our study is given in table 11.3, together with the definition of each industry in the SIC. Industries 6–24 comprise manufacturing, while Industries 25–41 make up services. We have subdivided government among government enterprises (excluding publicly owned electric utilities), public education, and general government (excluding public education). The remaining sectors are agriculture, three mining sectors, construction, and the household sector. Table 11.3 gives output, intermediate input, and value added for all forty-four industries in the year 2000. Households and general government have no intermediate inputs, so that output is equal to value-added. Table 11.4 gives growth rates of output, intermediate input, and value added for all forty-four sectors for the period 1977–2000. Output growth is most rapid for the IT-producing industries—Computers, Communications, Electronic Components, and Computer Services. Growth of intermediate input is also most rapid for these sectors. Finance, Other Business Services, Communications, and Professional and Social Services—all intensive users of IT—comprise the next echelon of four industries in terms of output growth. The IT sectors also lead in growth of intermediate input, but Finance has a slight edge over Communications Equipment in the growth of value added. Table 11.5 presents growth rates of output for all forty-four sectors for
Table 11.3
Industry output, intermediate input, and value added, 2000 Intermediate input
Valueadded
Code/Industry name
Output
1 2 3 4 5 6 7 8 9 10 11
388,994 32,824 23,081 131,487 995,279 203,714 111,040 192,553 279,540 329,307
193,213 16,711 8,907 63,982 576,079 121,015 57,519 130,936 154,000 173,745
195,781 16,113 14,175 67,505 419,200 82,699 53,522 61,617 125,540 155,561
01, 02, 07–09 10, 14 12 13 15–17 24–25 32 33 34 35 X357
142,803 147,945 101,430 183,792 417,709
104,860 86,083 53,270 97,991 344,636
37,943 61,862 48,160 85,802 83,072
357 36 X366–367 366 367 371 372–379
186,241
99,120
87,121
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services
236,815 528,328 156,160 175,955 233,512 422,654 242,918 170,270 553,535 430,330 245,658 81,196 836,346 1,132,266 915,355 341,940 829,689 308,442 476,667 833,500 167,952 130,847 902,500 716,988
109,769 357,205 97,780 103,013 95,800 239,216 208,722 92,810 290,201 199,302 79,331 54,775 301,544 476,991 385,906 191,689 191,795 120,124 118,796 283,836 49,533 54,004 301,107 330,525
127,046 171,122 58,380 72,942 137,712 183,438 34,196 77,459 263,335 231,027 166,326 26,421 534,801 655,275 529,449 150,251 637,894 188,317 357,871 549,663 118,418 76,843 601,393 386,463
41 42 43 44
Government enterprises Household Government, excl. education Government education
253,696 1,394,410 747,471 400,312
88,546 0 0 0
165,149 1,394,410 747,471 400,312
12 13 14 15 16 17
SIC code
38–39 20–21 22–23, 31 26 27 28 29 30 40–47 48 491, %493 492, %493, 496 50–51 52–59 60–62, 67 63–64 65 737 73 X737 801–806 81 82 832–829 494–497, 701–729, 750–799 88
Notes: All figures in millions of current dollars. % indicates part of an SIC code. X indicates excludes SIC code.
Table 11.4
Industry output, intermediate input, and value-added growth, 1977–2000
Industry name Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education Average
Output
Intermediate input
Value-added
2.40 1.53 1.89 –0.32 1.65 2.00 1.65 0.22 1.98 1.40 23.70 2.28 8.73 18.15 2.60 2.32 3.25 1.60 0.33 1.64 2.06 1.45 0.47 3.61 2.76 4.48 2.17 –1.87 4.33 2.65 7.72 2.29 2.74 13.52 5.16 3.18 1.86 2.45 4.36 3.29 2.46 3.69 1.38 2.13
0.73 1.05 1.25 –0.12 2.66 1.91 1.75 –0.38 2.07 2.43 14.99 2.92 10.01 11.02 3.11 3.28 4.51 1.28 –0.33 1.53 2.42 1.47 1.23 2.37 2.80 4.47 2.12 –1.81 2.49 2.90 7.60 4.15 2.83 15.94 3.24 4.67 1.55 2.80 4.36 3.57 3.03 0.00 0.00 0.00
4.74 1.89 2.88 0.12 0.40 2.21 1.55 1.61 1.87 0.33 42.01 1.40 7.36 25.44 0.94 1.38 2.23 2.51 1.49 1.80 1.73 1.44 1.26 5.23 2.77 4.47 2.01 –4.40 5.35 2.48 7.74 0.50 2.71 12.00 6.30 2.47 2.61 2.23 4.42 3.13 2.13 3.69 1.38 2.13
3.58
3.41
4.09
Note: All figures are average annual growth rates.
Table 11.5
Growth of industry output, by subperiod
Industry name Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education Average
1977–2000
1977–90
1990–95
1995–2000
2.40 1.53 1.89 –0.32 1.65 2.00 1.65 0.22 1.98 1.40 23.70 2.28 8.73 18.15 2.60 2.32 3.25 1.60 0.33 1.64 2.06 1.45 0.47 3.61 2.76 4.48 2.17 –1.87 4.33 2.65 7.72 2.29 2.74 13.52 5.16 3.18 1.86 2.45 4.36 3.29 2.46 3.69 1.38 2.13
2.25 1.37 3.04 –0.56 1.22 1.67 0.31 –1.74 0.13 –0.62 21.65 1.08 6.07 12.32 –0.08 3.89 3.47 1.47 0.01 1.98 2.90 1.20 0.05 3.03 2.45 3.79 2.27 –2.20 4.47 2.30 7.63 3.30 3.01 14.38 4.66 3.75 2.78 2.72 5.02 2.83 2.40 3.59 2.04 2.04
1.87 1.67 –0.01 –1.13 –0.07 1.59 1.19 2.49 3.18 4.16 21.19 4.10 9.84 22.80 6.08 –5.43 1.76 2.01 2.57 1.62 –0.05 1.36 0.53 5.28 3.55 4.34 1.19 –2.24 4.14 2.03 5.65 0.48 2.38 11.45 4.05 2.22 –0.61 1.32 2.34 3.36 1.63 2.76 0.19 2.36
3.33 1.80 0.80 1.10 4.48 3.25 5.60 3.04 5.58 3.88 31.50 3.56 14.52 28.65 6.07 5.99 4.18 1.52 –1.07 0.79 1.97 2.20 1.50 3.45 2.81 6.41 2.87 –0.65 4.16 4.17 10.00 1.48 2.40 13.38 7.58 2.66 1.95 2.87 4.67 4.42 3.44 4.85 0.86 2.15
3.58
3.17
3.21
5.01
Note: All figures are average annual growth rates.
1977–90, 1990–95, and 1995–2000. Output growth is most rapid for the ITproducing industries during the 1990s, but the growth of Finance outpaces that of Communications Equipment before 1990. The second echelon of four industries in at least one of the three subperiods includes six industries, indicating the range of industries with relatively rapid growth. Table 11.6 gives growth rates of intermediate input for the three subperiods. Again, growth is most rapid for the IT-producing industries during the 1990s, but Finance outstrips Communications Equipment before 1990. The second echelon of four industries in at least one of the three subperiods includes eleven industries, indicating the widespread importance of intermediate inputs. Finally, table 11.7 shows growth rates of value added for the three subperiods. The growth of value added in the IT-producing sectors is strong for all three subperiods, but Petroleum Refining leads Communications Equipment and Computer Services during 1990–95 and Finance leads Communications Equipment during 1995–2000. The second echelon of four industries in at least one subperiod includes ten industries, showing the range of industries with relatively rapid growth in value added. Our overall conclusion from these results is that the IT-producing sectors stand out in terms of rapid growth of output, intermediate input, and value added for the period 1977–2000 as a whole and for the three subperiods. Finance surpasses Communications Equipment in growth of value added for the period as a whole and a number of industries have more rapid growth of intermediate input and/or value added during one or more subperiods. Nonetheless, the picture that emerges from these data is one of rapid growth of the IT-producing industries for the period as a whole and accelerating growth during the period 1995–2000. 11.3 Measuring Capital Input This section outlines our methodology for measuring the flow of capital services in each industry. The key objective is to account for substitution among assets with different marginal products. The methodology was originated by Jorgenson and Griliches (1995), who constructed price and quantity indexes for capital input, based on Jorgenson’s (1996) model of the corporate cost of capital. These indexes were extended to the industry level for all three legal forms of organization—corporate, noncorporate, and household—by Jorgenson, Gollop, and Fraumeni (1987, chap. 4) and updated and revised by Jorgenson (1995b). We incorporate recent methodological changes developed by Jorgenson and Stiroh (2000). These include the use of asset-specific revaluation terms in the service price equation. In addition, capital service flows from new investments are assumed to become available in the middle of the year, rather than at the end of the year, as in our earlier work.
Table 11.6
Growth of industry intermediate input, by subperiod
Industry name Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education Average
1977–2000
1977–90
1990–95
1995–2000
0.73 1.05 1.25 –0.12 2.66 1.91 1.75 –0.38 2.07 2.43 14.99 2.92 10.01 11.02 3.11 3.28 4.51 1.28 –0.33 1.53 2.42 1.47 1.23 2.37 2.80 4.47 2.12 –1.81 2.49 2.90 7.60 4.15 2.83 15.94 3.24 4.67 1.55 2.80 4.36 3.57 3.03 0.00 0.00 0.00
0.10 0.87 4.85 1.43 2.14 0.89 0.14 –1.91 0.59 0.54 13.48 1.22 4.53 6.81 0.37 4.41 3.56 1.13 –0.54 2.31 3.74 1.41 0.26 1.67 2.82 2.88 2.85 –2.21 1.32 2.92 6.79 6.02 3.03 15.34 0.94 4.96 0.48 2.64 3.95 2.45 2.90 0.00 0.00 0.00
1.00 0.37 –4.01 –4.19 0.99 4.02 1.55 2.81 2.72 4.87 14.73 5.12 14.24 17.19 6.99 –5.03 4.45 1.50 2.66 2.86 0.40 1.64 –1.52 4.90 2.87 4.76 0.83 –0.61 4.01 2.27 6.01 0.85 1.62 14.35 3.94 4.84 1.05 2.18 3.24 4.77 3.05 0.00 0.00 0.00
2.11 2.19 –2.82 –0.08 5.66 2.46 6.12 0.40 5.25 4.91 19.21 5.17 20.04 15.81 6.32 8.63 7.06 1.45 –2.80 –1.82 1.01 1.43 6.51 1.67 2.66 8.33 1.51 –1.99 4.02 3.47 11.31 2.58 3.49 19.08 8.49 3.77 4.84 3.84 6.56 5.27 3.35 0.00 0.00 0.00
3.41
2.78
3.42
5.04
Note: All figures are average annual growth rates.
Table 11.7
Growth of industry value added, by subperiod
Industry name Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education Average
1977–2000
1977–90
1990–95
1995–2000
4.74 1.89 2.88 0.12 0.40 2.21 1.55 1.61 1.87 0.33 42.01 1.40 7.36 25.44 0.94 1.38 2.23 2.51 1.49 1.80 1.73 1.44 1.26 5.23 2.77 4.47 2.01 –4.40 5.35 2.48 7.74 0.50 2.71 12.00 6.30 2.47 2.61 2.23 4.42 3.13 2.13 3.69 1.38 2.13
5.44 1.69 2.77 –1.34 0.13 2.98 0.48 –1.29 –0.43 –1.78 35.12 0.91 7.33 17.43 –1.39 3.37 3.40 2.59 0.93 1.41 2.20 0.91 5.86 4.87 2.13 4.50 1.65 –6.09 6.23 1.87 8.11 0.83 3.00 13.83 6.78 3.19 4.78 2.83 5.64 3.24 2.11 3.59 2.04 2.04
3.03 2.91 2.89 1.77 –1.45 –2.02 0.80 1.71 3.77 3.40 36.40 2.71 5.94 28.74 2.84 –5.84 –0.34 3.13 2.42 –0.45 –0.42 1.00 14.14 5.76 4.20 4.02 1.38 –6.61 4.20 1.86 5.41 0.03 2.61 9.66 4.08 0.97 –1.23 0.72 1.93 2.23 0.84 2.76 0.19 2.36
4.64 1.40 3.13 2.28 2.92 4.43 5.05 9.06 5.97 2.74 65.54 1.36 8.85 42.94 5.07 3.44 1.76 1.66 2.04 5.03 2.65 3.24 –23.57 5.65 2.98 4.83 3.56 2.18 4.24 4.68 9.10 0.10 2.08 9.60 7.27 2.10 0.79 2.17 3.75 3.73 3.49 4.85 0.86 2.15
4.09
3.77
3.51
5.50
Note: All figures are average annual growth rates.
11.3.1 Notation

We begin with notation for measures of investment, capital stocks, and capital services for individual assets and industry aggregates. The subscript k refers to the specific asset, while j refers to the industry; time subscripts are suppressed wherever possible. For individual assets we have

I_{k,j}      quantity of investment
P_{I,k,j}    price of investment
\delta_k     geometric depreciation rate
Z_{k,j}      quantity of capital stock
P_{I,k,j}    price of capital stock
K_{k,j}      quantity of capital services
P_{K,k,j}    price of capital services

For industry aggregates:

I_j          quantity index of industry investment
P_{I,j}      price index of industry investment
Z_j          quantity index of industry capital stock
P_{Z,j}      price index of industry capital stock
K_j          quantity index of industry capital services
P_{K,j}      price of industry capital services
Q_{K,j}      index of industry capital quality
11.3.2 Methodology for Estimating Capital Service Flows For each industry, we begin with data on the quantity of investment in each individual asset Ik, j . We assume that the price index for each asset transforms nominal investment in different time periods into identical “efficiency units” over time, so that investments of different vintages are perfect substitutes in production. Improvements in the performance of capital input—for example, a faster processor in a computer—are incorporated into the price index that transforms the current vintage of investment into an equivalent number of efficiency units of earlier vintages. As a concrete example, the constant-quality price index for computer equipment transforms more recent investments in faster, more powerful, computers into additional units of constant efficiency base-year capital. We transform data on the quantities of investment into estimates of capital stocks for all assets, industries, and years through the familiar perpetual inventory method. This is consistent with the assumption of perfect substitutability across vintages and defines the capital stock for each industry and asset as
(12)
Z_{k,j,t} = Z_{k,j,t-1} (1 - \delta_k) + I_{k,j,t} = \sum_{\tau=0}^{\infty} (1 - \delta_k)^{\tau} I_{k,j,t-\tau},
where the efficiency of an asset is assumed to decline geometrically with age at the rate \delta_k. Equation (12) has the interpretation that capital stock is a weighted sum of past investments, where the weights are derived from the relative efficiencies of capital of different ages, captured by the geometric rate of decline. Note that the rates of decline in efficiency \delta_k are indexed by asset only. Finally, because all capital is measured in base-year efficiency units, the appropriate price for valuing the capital stock is the investment price deflator P_{I,k}. The installed stock of capital Z_{k,j} represents the accumulation of past investments, but we are primarily interested in K_{k,j}, the flow of capital services from that stock over a given period. This distinction is not critical for individual assets, but it becomes essential when we aggregate heterogeneous assets to form an industry or economywide aggregate. For each asset we assume that new investment becomes available for production at the midpoint of the year so the flow of capital services for each industry and each asset is proportional to the arithmetic average of the current and lagged capital stock: (13)
K_{k,j,t} = q_{k,j} (0.5 Z_{k,j,t} + 0.5 Z_{k,j,t-1}),
where q_{k,j} denotes the proportionality constant, normalized to one. We estimate a price of capital services that corresponds to the quantity of capital input via the cost-of-capital formula. In equilibrium, with no uncertainty, investors are indifferent between two alternatives: earning a nominal rate of return i_{j,t} on a different investment or buying a unit of capital, collecting a rental fee, and then selling the depreciated asset in the next period. For investors purchasing the asset, the cost of capital equals the marginal product of the asset. This implies the familiar cost of capital, or user cost, for each asset in each industry: (14)
P_{K,k,j,t} = (i_{j,t} - \pi_{k,j,t}) P_{I,k,j,t-1} + \delta_k P_{I,k,j,t},
where the asset-specific capital gains term is \pi_{k,j,t} = (P_{I,k,j,t} - P_{I,k,j,t-1}) / P_{I,k,j,t-1} and i_{j,t} is the nominal rate of return in industry j. The cost of capital accounts for the nominal rate of return, asset-specific depreciation, and an asset-specific revaluation term. An asset with a higher depreciation rate has a higher marginal product and must receive a higher capital service price as compensation. Similarly, if an investor expects a capital loss (\pi_{k,j,t} < 0), then a higher service price is required. Jorgenson and Stiroh (2000) and Oliner and Sichel (2000) discuss the importance of incorporating asset-specific revaluation terms for information technology assets experiencing rapid downward revaluations. Tax considerations are also a key component of capital service prices, as discussed by Hall and Jorgenson (1996) and developed in detail by Jorgenson and Yun (2001). We follow Jorgenson and Stiroh (2000) and Jorgenson
and Yun in accounting for investment tax credits, capital consumption allowances, the statutory tax rate, property taxes, debt/equity financing, and personal taxes, by estimating an asset-specific, after-tax real rate of return for each asset in each industry, r_{k,j,t}, that enters the cost-of-capital formula: (15)
P_{K,k,j,t} = \frac{1 - ITC_{k,t} - \tau_t z_{k,t}}{1 - \tau_t} (r_{k,j,t} P_{I,k,j,t-1} + \delta_k P_{I,k,j,t}) + \tau_p P_{I,k,j,t-1},
where ITC_{k,t} is the investment tax credit, \tau_t is the statutory tax rate, z_{k,t} is the present value of capital consumption allowances for tax purposes, and \tau_p is a property tax rate, all for asset k at time t. For the corporate sector, the rate of return, r_{k,j,t}, is calculated as a weighted average of real, after-tax returns to debt and equity. We then assume the after-tax rate of return to all assets in each industry is the same and exhaust the value of payments to capital across all assets in the corporate sector of each industry. Jorgenson and Yun provide details about the procedure and differences across the corporate, noncorporate, and household sectors. Equations (12) through (15) describe our estimation procedure for the capital service flow and capital service price, K_{k,j,t} and P_{K,k,j,t}, respectively, for each asset, industry, and time period. We combine capital services for all assets within an industry by means of a Tornqvist quantity index as (16)
\Delta \ln K_j = \sum_k v_{k,j} \Delta \ln K_{k,j},
where the vk, j weights are the two-period average shares of each type of capital income in total capital income and the corresponding price index of capital inputs PK, j is defined implicitly to make the value identity hold: (17)
P_{K,j} K_j = \sum_k P_{K,k,j} K_{k,j}.
Similarly, the quantity of capital stock is defined by (18)
\Delta \ln Z_j = \sum_k w_{k,j} \Delta \ln K_{k,j},
where the w k, j weights are the two-period average nominal shares of each type of capital stock in total capital stock and the corresponding price index for capital stock PZ, j,t is defined implicitly from the identity (19)
P_{Z,j,t} Z_{j,t} = \sum_k P_{I,k,j,t} K_{k,j,t}.
Finally, we define capital quality QK, j for industry j as the ratio of capital input to capital stock: (20)
Q_{K,j} = \frac{K_j}{Z_j}.
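The following sketch strings equations (12) through (20) together for a single industry with two hypothetical assets: perpetual inventory stocks, midyear service flows, a simplified user cost that omits the tax terms of equation (15), Tornqvist aggregation of capital input and capital stock, and capital quality. The depreciation rates, deflators, and nominal return are illustrative values, not the chapter's estimates.

```python
# A minimal sketch of equations (12)-(20) for one industry and two assets.
# All parameter values and investment series are illustrative.
import numpy as np

T = 6                                            # years of hypothetical data
delta = np.array([0.315, 0.05])                  # depreciation: "computer-like", "structure-like"
p_inv = np.array([[0.82**t, 1.02**t] for t in range(T)])   # investment deflators
inv = np.array([[10.0 + t, 20.0] for t in range(T)])       # real investment

# Equation (12): Z_t = Z_{t-1} (1 - delta) + I_t  (zero initial stock for simplicity)
Z = np.zeros((T, 2))
for t in range(T):
    Z[t] = (Z[t - 1] * (1.0 - delta) if t > 0 else 0.0) + inv[t]

# Equation (13): midyear convention, K_t = 0.5 (Z_t + Z_{t-1})
K = 0.5 * (Z[1:] + Z[:-1])

# Equation (14): user cost P_K = (i - pi) P_I,t-1 + delta P_I,t  (no tax terms)
i = 0.06                                         # nominal rate of return, assumed common
pi = (p_inv[1:] - p_inv[:-1]) / p_inv[:-1]       # asset-specific revaluation
P_K = (i - pi) * p_inv[:-1] + delta * p_inv[1:]

# Equations (16)-(17): Tornqvist index of capital input, service prices as weights
value_K = P_K * K                                # capital income by asset
v = value_K / value_K.sum(axis=1, keepdims=True)
vbar = 0.5 * (v[1:] + v[:-1])
dlnK = np.sum(vbar * np.log(K[1:] / K[:-1]), axis=1)

# Equations (18)-(19): capital stock index, current asset (investment) prices as weights
value_Z = p_inv[1:] * K
w = value_Z / value_Z.sum(axis=1, keepdims=True)
wbar = 0.5 * (w[1:] + w[:-1])
dlnZ = np.sum(wbar * np.log(K[1:] / K[:-1]), axis=1)

# Equation (20) in growth rates: capital quality growth = input growth - stock growth
dln_quality = dlnK - dlnZ
print(dlnK, dlnZ, dln_quality)
```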
Note that quantity indexes of capital inputs and capital stocks aggregate the same asset quantities Kk, j by means of a Tornqvist index; the difference is the use of service prices versus asset prices as weights. This implies that growth in capital quality reflects substitution toward assets with relatively high service prices and high marginal products. For example, large depreciation rates and rapid downward revaluations for computers imply that these assets have high marginal products, so their weight in an index of capital services exceeds their weight in an index of capital assets. Substitution toward computers as computer prices fall implies that capital input grows faster than capital stock; this is captured by our index of capital quality. Second, our measure of industry capital stock is implicitly a two-period average, as it aggregates two-period averages of capital stocks using current asset prices. Finally, we break down IT capital and non-IT capital. The capital services of IT assets KIT, j include the service flows from computer hardware, software, and communications equipment, while the non-IT capital service flow KNON, j includes the services from all other equipment, structures, inventories, and land. We create subindexes of capital services as (21)
\Delta \ln K_{IT,j} = \sum_{k \in IT} v_{IT,k,j} \Delta \ln K_{k,j}
\Delta \ln K_{NON,j} = \sum_{k \notin IT} v_{NON,k,j} \Delta \ln K_{k,j},
where the shares are those of IT capital and non-IT capital, respectively. 11.3.3 Data Our primary data source for capital input is the Tangible Wealth Survey produced by the BEA and described in Herman (2000a,b). These data include detailed investment by industry and asset class and contain historical cost investment and chain-type quantity indexes for sixty-one types of nonresidential assets from 1901 to 2000, forty-eight types of residential assets from 1901 to 2000, and thirteen different types of consumers’ durables from 1925 to 2000. Nonresidential investment is available for sixty-two industries, which we collapse into forty-four industries, including private households and governments. Several data adjustments are worth mentioning. First, for each of the forty-one private industries, we reclassify the BEA data on sixty-one nonresidential assets into fifty-two nonresidential assets. The residential assets and thirteen consumers’ durable assets are allocated to the real estate and household industries. Second, we combine investment in residential equipment with other equipment in the nonresidential category. Third, we control the total value of investment in major categories—equipment and software, nonresidential structures, residential structures, and total consumer durables—to match the totals in the NIPA. These adjustments lead
to a complete time series of fifty-seven assets in both current and chained 1996 dollars for each of the forty-four industries. The investment and capital data are then allocated across three ownership categories—corporate, noncorporate, and households—based on shares provided by the BEA. There is no ownership breakdown for household and government industry data. Finally, we use prices for each asset from the NIPA. Geometric depreciation rates for the perpetual inventory calculations are based on Fraumeni (1997) with the exceptions of computers and automobiles. As described in Jorgenson and Stiroh (2000), the BEA reports nongeometric profiles for these assets, and we estimate best geometric approximation of 0.315 for computers and software and 0.272 for automobiles. We have collected data on inventories and land to complete our capital estimates. Inventory data come primarily from the NIPA in the form of farm and nonfarm inventories, allocated across industries. These are measured in current dollars with a corresponding price index. Inventories are assumed to have a depreciation rate of zero and do not qualify for an investment tax credit or capital consumption allowances for tax purposes, so the capital rental price formula is a simplified version of equation (15). Data on land are more problematic. Through 1995 the Federal Reserve Board (1995, 1997) published detailed data on land values and quantities in its “Balance Sheets for the U.S. Economy,” but the underlying data became unreliable and are no longer published. We use the limited land data available in the “Flow of Funds Accounts of the United States” and historical data described in Jorgenson (1995b) to estimate a price and a quantity of private land. As a practical matter, this quantity series varies very little, so its major impact is to slow the growth of capital stock and capital input. Like inventories, the depreciation, investment tax credit, and capital consumption allowance for land are zero. Finally, we have collected data on industry-level value added from the “Gross Product Originating” database maintained by the BEA and described in Lum and Moyer (2000) and from the BLS, as described above. The BEA data include the value of output, value added, and intermediate inputs for sixty-two industries from 1947 to 2000. We have aggregated these industries to match our forty-four industries. For the estimation of capital service prices, we begin with value added in the corporate sector and subtract corporate labor income to estimate the amount of income available to corporate capital. Similarly, we subtract noncorporate labor income from value added, as described in the following section, to obtain the value of noncorporate capital income. 11.3.4 Issues Although the methodology described above conforms to the international standards recommended by Blades (2001) and Schreyer (2001) of the OECD, there are important issues related to the availability of data and
the plausibility of certain assumptions that arise in the implementation of this methodology. This subsection outlines these issues and describes our solutions. Negative Service Prices The intuition behind our estimation of capital service prices is that the value of capital service flows must exhaust capital income. One problem is that there is very little income available to capital in some years and this leads to negative estimates of service prices. For example, if asset inflation rates are high and depreciation rates are low, relative to the interest rate, negative service prices may result. Economically, this is possible and suggests that capital gains are higher than expected, so that a small service price is possible in equilibrium. Berndt and Fuss (1986) and Diewert (chap. 12 in this volume) discuss this issue in more detail. Empirically, however, negative service prices make aggregation difficult, so we have made adjustments for several assets. Certain assets showed negative service prices, notably inventories, land, and structures in the 1970s, when inflation was high. Our first adjustment of the data was to use a smoothed inflation rate from the surrounding years rather than the current inflation in the cost of capital calculation. Land showed large capital gains throughout and has no depreciation, so that we used the economywide rate of asset inflation for all years. As a last resort, we were forced to impute additional income, which added to both capital income and the value of output, but this is a relatively small issue empirically. Deflators The methodology described above allows for deflators for each asset to differ across industries. In practice, however, the industry-specific deflators for certain assets were quite erratic across industries and time. The Tangible Wealth Survey reports investment in millions of dollars with no decimal points, so any asset with a very small value and quantity in a given year has a deflator that is not estimated precisely. We simply aggregated each asset across all industries and used the economywide average deflator to remove the noise. A second concern is our reliance on the deflators incorporated in the Tangible Wealth Survey and the NIPAs. For most assets this is not a problem, but Jorgenson and Stiroh (2000) argue that the official deflators do not fully account for quality change in software and communications equipment. For these assets only a small share of investment is associated with constant-quality deflators, leading to an overstatement of price increases and an understatement of quantity increases. The software and communications equipment deflators, for example, rose at annual rates of 1.1 and 1.8 percent per year for 1959 to 1999, compared to a decline of 18.8 for computer hardware. We do not address this issue here but simply observe that these data may understate the growth of IT capital input in all industries.
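The smoothing adjustment described under "Negative Service Prices" above can be seen in a small example: with a burst of asset-price inflation and a low depreciation rate, the user cost of equation (14) turns negative, while replacing the current revaluation term with a smoothed one (here simply the sample average) keeps it positive. All numbers are hypothetical.

```python
# Illustration of a negative user cost under a burst of asset-price inflation
# (equation (14)) and of a simple smoothing adjustment; hypothetical numbers.
import numpy as np

i = 0.07                                        # nominal rate of return
delta = 0.02                                    # low depreciation rate
p = np.array([1.00, 1.02, 1.22, 1.25, 1.27])    # asset price with one sharp jump

pi = (p[1:] - p[:-1]) / p[:-1]                  # year-by-year revaluation term
user_cost_raw = (i - pi) * p[:-1] + delta * p[1:]

# Smoothed alternative: replace the current-year revaluation with an average
# over the sample (the text uses a smoothed rate from surrounding years).
pi_smooth = np.full_like(pi, pi.mean())
user_cost_smooth = (i - pi_smooth) * p[:-1] + delta * p[1:]

print(user_cost_raw)      # the year of the price jump produces a negative entry
print(user_cost_smooth)   # the smoothed revaluation keeps the user cost positive
```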
Depreciation Rates There has been considerable debate about the appropriate depreciation rates for assets with constant-quality deflators. As pointed out by Oliner (1993, 1994) and, more recently, by Whelan (2002), if the quantity of investment is constructed with a constant-quality deflator, the depreciation rate should be obtained from constant-quality price data by age of asset. This corresponds to “partial depreciation” in Oliner’s terminology. Otherwise, the cost-of-capital formula in equation (14) would count depreciation twice: once through the asset-specific inflation rate and again through the asset-specific depreciation rate. This is a major issue for computer hardware. The BEA depreciation rates currently incorporate Oliner’s (1993, 1994) estimates for all computer components except personal computers. According to Herman (2000a), the personal computer depreciation profile is based on a study by Lane (1999). Our best geometric approximation to the BEA depreciation rates, however, does not use this new estimate; rather, we employ the earlier BEA profile based on the work of Oliner (1993, 1994). Capital Use Matrix The detailed investment data by industry and assets are based on a relatively old, and perhaps outdated, view of the composition of industry investment. Investment for a given asset is allocated across industries, based on a capital flow table produced by the BEA (see Bonds and Aylor 1998). This capital-use matrix is estimated for benchmark interindustry transaction tables at five-year intervals. The latest capital-use matrix incorporated here is from 1992, and more recent data would obviously be very valuable. Capital for IT-Producing Industries The Tangible Wealth data produced by the BEA report capital data for roughly two-digit SIC industries but provide no breakdown for the ITproducing industries in which we are particularly interested. We have capital data for industries like Commercial and Industrial Machinery (SIC 35) but not for the detailed components like Computers and Office Equipment (SIC 357). This limitation forced us to use a variety of data sources to estimate the capital service flow for the IT-producing industries. The detailed BLS industry value-added data allow us to split the broad industry value added into component industries. Similarly, we combine detailed data from the Current Population Survey (CPS) and decennial Census (discussed below) with BLS and NIPA employment data to estimate total labor compensation for component industries. The value of capital income for each component industry is obtained by dividing broad industry capital income using the implied capital shares from the component industry value added and labor compensation. Finally, we assumed that all components of the broad industry faced the same capital service and capital stock prices, which were estimated as described above based on
two-digit SIC data available in the Tangible Wealth survey. The quantity of capital services was then calculated as the estimated capital income for each component industry divided by the common service price. Similarly, capital quality was assumed constant across detailed industries within the broader two-digit industries, which allowed estimates of capital stock to be retrieved. While this approach is not ideal, it makes the best use of the limited data available. We have estimates of the value of capital income, but if the ITproducing industries are investing in a different mix of assets—for example, more heavily in IT-related assets—then this approach will misstate the growth of capital input and the TFP residual. While it seems likely that this will understate capital input and overstate TFP growth, we cannot gauge the magnitude of the bias without better data on investment patterns for these industries. Corporate and Noncorporate Capital Services The final issue is the allocation of capital between the corporate and noncorporate sectors; this is important because of the differential taxation of corporate income (Jorgenson and Yun 2001). As discussed above, BEA data permit us to split the capital stock, but we have no direct information on the capital income to allocate to each sector. In Jorgenson, Gollop, and Fraumeni (1987) and Jorgenson and Stiroh (2000), this allocation assumed that the nominal rate of return in the noncorporate sector matched the estimated nominal rate of return in the corporate sector after corporate taxes. Noncorporate labor income was then estimated as residual. For many of the small industries, however, this led to unstable estimates of average wages and labor quality. We now allocate income to the noncorporate sector by assuming that labor compensation per hour is equal for corporate employees and selfemployed and unpaid family workers for each type of labor in each sector. We then estimate capital income in the noncorporate sector as a residual (observed noncorporate value added less estimated noncorporate labor income). While this leads to fluctuations in the rate of return for some sectors, the relatively small size of the noncorporate sector in most industries mitigates this effect. In industries with large noncorporate sectors, like Agriculture and Real Estate, our new estimates are quite different from our earlier ones. 11.3.5 Results Table 11.8 presents growth rates of capital input, IT capital input, and non-IT capital input for the period 1977–2000. The growth of capital input is greatest for Electronic Components and Computer Services, two IT-producing industries, and for Finance and Insurance, two IT-using industries. The second echelon of four industries includes Other Business
Table 11.8
Growth of industry capital input, 1977–2000
Industry name
Total
IT
Non-IT
Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education
0.48 1.35 0.63 2.47 2.27 1.71 1.58 –0.10 2.04 5.37 4.58 –0.57 7.08 14.90 2.05 5.66 5.40 2.22 1.22 2.97 5.08 2.83 1.16 3.97 2.33 6.23 2.18 2.53 8.98 4.74 10.61 10.25 2.47 14.23 9.26 5.86 6.12 4.66 6.94 4.37 3.09 3.69 2.83 4.13
22.19 24.54 27.53 20.16 24.28 19.44 13.70 14.04 20.32 16.72 15.93 8.91 16.56 24.38 18.65 20.88 22.52 18.43 17.06 16.61 22.55 21.79 10.84 18.79 17.32 7.03 18.56 25.86 24.13 28.79 21.97 20.80 15.29 28.93 23.97 27.45 21.61 14.12 24.73 16.35 16.18 13.99 13.41 14.70
0.37 0.97 0.33 2.16 1.58 1.14 0.63 –0.54 1.20 2.86 2.07 –2.38 5.27 13.09 1.41 3.11 2.75 1.68 0.42 2.47 2.17 2.08 0.95 3.16 1.18 5.27 1.57 1.24 4.87 3.37 6.86 7.36 2.11 8.17 3.20 4.71 2.88 3.39 4.36 3.61 1.65 3.13 1.64 2.93
4.36
19.36
2.78
Average
Note: All figures are average annual growth rates. IT capital includes computer hardware, computer software, and telecommunications equipment.
Services, Wholesale Trade, Communications Equipment, and Professional and Social Services. While growth of capital input for Computers is substantial, this industry falls below the second echelon. Growth of IT capital input dominates that of non-IT capital input for all industries. The growth rate of IT capital input is highest for Computer Services, Retail and Eating, Coal Mining, and Health Services, while the second echelon of four industries includes Gas Utilities, Professional and Social Services, Non-Energy Mining, and Electronic Components. Electronic Components and Computer Services, two IT-producing industries, and Finance and Insurance, two IT-using industries, are also the leaders in the growth of non-IT capital input. The second echelon includes Communications Services and Communications Equipment as well as Wholesale Trade and Health Services. Table 11.9 shows growth rates of capital input for the three subperiods. Electronic Components and Computer Services, along with Finance and Insurance, lead before 1990, with Other Services replacing Finance during 1990–95. Computers, Communications Equipment, Computer Services, and Other Business Services lead the surge in investment after 1995. The second echelon of four industries in at least one subperiod includes nine additional industries, indicating the breadth of the importance of investment. Table 11.10 gives growth rates of IT capital input for the three subperiods. Before 1990 the leading sectors are Retail and Eating, Gas Utilities, Computer Services, and Health Services. During the period 1990–95 new industries lead in growth of IT capital input—Coal Mining, Construction, and Non-energy Mining, as well as Electronic Components. The surge in growth of capital input after 1995 is led by Computers, Computer Services, Other Business Services, and Stone, Clay, and Glass. The second echelon of four industries leading in at least one subperiod includes nine additional industries. Finally, table 11.11 presents growth rates of non-IT Capital Input for the three subperiods. Before 1990 Electronic Components, Insurance, Finance, and Computer Services lead in non-IT Investment, while Other Services displaces Finance during the period 1990–95. Communications, Computers, and Computer Services, together with Construction, emerge into the lead after 1995. The second echelon of four industries in at least one subperiod includes six additional industries. The most important overall conclusion from these results is the domination of IT investment over non-IT investment for the period 1977–2000 as a whole and for the three subperiods 1977–90, 1990–95, and 1995–2000. Our methodology captures this through the use of the concept of the cost of capital. Both IT-producing and IT-using industries have invested substantially in IT and non-IT capital. The IT-producing sectors are among the leading investors in IT capital, but investment in IT is very widespread, encompassing manufacturing, services, utilities, and mining industries.
Table 11.9
Growth of industry capital input, by subperiod
Industry name Agriculture Non-energy mining Coal mining Oil and gas mining Construction Lumber, wood, furniture Stone, clay, glass Primary metal Fabricated metal Machinery, excl. computers Computers and office equipment Other electrical machinery Communications equipment Electronic components Motor vehicles Other transportation equipment Instruments and miscellaneous mfg. Food and tobacco Textiles, apparel, leather Paper Printing and publishing Chemicals Petroleum refining Rubber and plastic Transportation Communications Electricity Gas Wholesale trade Retail and eating Finance Insurance Real estate (rental) Computer services Business svc., excl. computer Health, private Legal Education, private Professional and social svcs. Other services Government enterprises Household Government, excl. education Government education Average
1977–2000
1977–90
1990–95
1995–2000
0.48 1.35 0.63 2.47 2.27 1.71 1.58 –0.10 2.04 5.37 4.58 –0.57 7.08 14.90 2.05 5.66 5.40 2.22 1.22 2.97 5.08 2.83 1.16 3.97 2.33 6.23 2.18 2.53 8.98 4.74 10.61 10.25 2.47 14.23 9.26 5.86 6.12 4.66 6.94 4.37 3.09 3.69 2.83 4.13
–0.37 0.57 0.42 4.35 0.04 1.65 1.03 –0.64 1.89 4.81 3.65 2.13 7.13 18.59 1.12 8.60 5.64 2.09 1.05 3.80 5.33 2.61 1.23 2.93 1.19 5.92 3.33 2.43 8.78 4.70 12.10 11.80 2.78 14.50 8.88 6.04 7.91 5.26 7.38 3.80 3.26 3.59 3.35 5.47
0.81 1.95 0.77 –1.54 2.11 0.67 –1.00 –0.36 1.25 4.95 –3.59 –5.00 –0.85 13.86 3.09 –0.73 3.77 1.75 1.95 1.86 2.86 2.93 3.14 4.71 1.86 4.74 0.46 2.11 7.44 4.28 5.76 8.42 1.46 10.39 2.32 4.79 0.86 1.94 3.83 6.09 3.08 2.76 2.35 3.35
2.37 2.77 1.07 1.60 8.23 2.91 5.59 1.56 3.22 7.25 15.19 –3.15 14.88 6.33 3.43 4.39 6.42 3.01 0.94 1.92 6.68 3.29 –1.01 5.94 5.78 8.50 0.92 3.21 11.04 5.32 11.60 8.05 2.68 17.36 17.19 6.46 6.72 5.80 8.90 4.13 2.64 4.85 1.98 1.40
4.36
4.59
2.67
5.44
Note: All figures are average annual growth rates.
Table 11.10   Growth of industry IT capital input, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 22.19 24.54 27.53 20.16 24.28 19.44 13.70 14.04 20.32 16.72 15.93 8.91 16.56 24.38 18.65 20.88 22.52 18.43 17.06 16.61 22.55 21.79 10.84 18.79 17.32 7.03 18.56 25.86 24.13 28.79 21.97 20.80 15.29 28.93 23.97 27.45 21.61 14.12 24.73 16.35 16.18 13.99 13.41 14.70
1977–90: 23.38 24.48 26.84 26.28 19.02 21.27 12.10 14.82 22.76 16.38 15.21 12.52 17.53 28.98 20.40 30.16 27.90 21.64 17.14 18.93 26.07 26.26 13.15 17.42 17.41 6.25 28.06 38.11 27.44 39.00 24.55 22.18 13.32 36.56 30.94 34.78 28.37 11.93 30.79 15.61 18.90 12.06 14.20 16.32
1990–95: 23.48 34.85 41.01 9.44 37.54 15.04 6.43 11.06 16.23 12.83 4.29 1.97 6.13 20.84 15.88 –1.20 13.47 11.78 19.23 13.11 14.33 15.18 11.80 19.35 14.41 5.32 3.98 11.83 17.92 10.24 13.07 17.60 15.51 10.43 2.36 14.33 5.00 12.51 10.00 16.77 10.65 13.54 9.35 10.35
1995–2000: 17.81 14.36 15.81 14.95 24.72 19.07 25.13 14.97 18.05 21.51 29.44 6.47 24.49 15.94 16.88 18.82 17.59 16.75 14.69 14.09 21.62 16.81 3.88 21.81 19.98 10.79 8.46 8.04 21.75 20.82 24.18 20.43 20.20 27.61 27.44 21.51 20.65 21.42 23.71 17.84 14.64 19.46 15.43 14.85
Average: 19.36 (1977–2000); 21.99 (1977–90); 13.62 (1990–95); 18.29 (1995–2000)
Note: All figures are average annual growth rates. IT capital includes computer hardware, computer software, and telecommunications equipment.
Table 11.11   Growth of industry non-IT capital input, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 0.37 0.97 0.33 2.16 1.58 1.14 0.63 –0.54 1.20 2.86 2.07 –2.38 5.27 13.09 1.41 3.11 2.75 1.68 0.42 2.47 2.17 2.08 0.95 3.16 1.18 5.27 1.57 1.24 4.87 3.37 6.86 7.36 2.11 8.17 3.10 4.71 2.88 3.39 4.36 3.61 1.65 3.13 1.64 2.93
1977–90: –0.41 0.45 0.36 4.09 –0.10 1.19 0.02 –1.03 1.25 2.73 1.56 0.14 5.14 16.60 0.44 5.15 3.48 1.58 0.54 3.37 3.40 1.95 1.00 2.35 0.54 5.48 2.52 1.27 5.61 3.21 8.83 9.47 2.60 6.55 0.93 5.00 4.71 4.60 4.91 3.17 2.02 3.24 2.50 4.62
1990–95: 0.62 1.12 0.07 –1.83 0.62 0.11 –1.48 –0.79 0.26 2.97 –5.56 –6.25 –2.10 12.61 2.53 –0.36 1.13 1.31 0.90 1.34 –0.12 2.17 2.91 3.75 0.49 4.07 0.21 0.40 2.96 3.63 2.83 5.50 1.02 10.54 2.47 3.93 –0.71 0.60 2.40 5.38 1.98 2.14 1.42 2.42
1995–2000: 2.14 2.18 0.49 1.16 6.90 2.03 4.32 0.99 2.04 3.08 11.02 –5.04 12.98 4.43 2.80 1.27 2.45 2.30 –0.38 1.27 1.26 2.34 –1.13 4.68 3.56 5.93 0.49 2.03 4.86 3.53 5.75 3.75 1.93 10.00 9.82 4.73 1.74 3.03 4.90 3.01 0.34 3.86 –0.36 –0.94
Average: 2.78 (1977–2000); 3.11 (1977–90); 1.58 (1990–95); 3.13 (1995–2000)
Note: All figures are average annual growth rates.
11.4 Measuring Labor Input

Our methodology for deriving labor input was introduced by Jorgenson and Griliches (1995), who constructed an index number of aggregate labor input, based on labor compensation data for male workers, classified by educational attainment. This was extended to the industry level and greatly disaggregated by age, sex, occupation, and class of employment, as well as educational attainment, in Jorgenson, Gollop, and Fraumeni (1987, chap. 3). These labor indexes were updated through 1995 with several important modifications by Ho and Jorgenson (1999). Our major innovation is to incorporate individual data from the CPS for every year between 1977 and 2000. In this section we describe the construction of labor input indexes by industry, including subindexes for college-educated and non-college-educated workers. Our discussion will be brief and concentrate on recent changes in methodology since the overall approach is well known.

To avoid common misunderstandings we first define terms. Following the United Nations (1993) System of National Accounts 1993 we use the terms volume and quantity index interchangeably. Our price index is a "constant quality" price index: the concept of quality is used to distinguish the volume index from a simple sum of hours worked. The term constant quality refers to the assumption that each type of worker has a constant effectiveness over time. The volume index of labor input takes into account the substitutions among different types of workers by weighting the components by their prices. The quality of hours worked is defined as the ratio of the volume of labor input to hours worked.

The concepts of volume and labor quality are useful for avoiding the confusion between substitution and TFP growth. Confusion may arise because the literature on production theory also includes the concept of "labor-augmenting" productivity growth. When productivity growth is labor augmenting, instead of writing the production function as in equation (1), we have

(22)   Y_t = f(K_t, A_t L_t),

where A_t is the labor-augmentation factor. One approach to measuring TFP is to use hours worked as an index of labor input L_t and calculate changes in the labor-augmentation factor A_t from the productivity residual. One might then say that the introduction of a concept of labor quality is a "semantic shift" that amounts to relabeling productivity. This confuses two distinct ideas. We use a constant-quality labor input index L_t rather than hours worked and an index of the quality of hours worked as a measure of compositional change. Productivity change is the residual calculated from this labor input index, and there is no separate role for labor-augmenting productivity change, so that no semantic shift is involved.
Our objective is to construct a measure of labor input that accounts for heterogeneity in hours worked and yet is tractable to implement. We have chosen to classify the workers by sex, age (seven classes), educational attainment (six classes), employment class (two types), and industry as described in table 11A.1. There is a total of 2 × 7 × 6 × 2 = 168 types of workers for each of 44 industries.15 We focus on 44 industries in this paper, giving a total of 168 × 44 = 7,392 cells.16 With this framework in mind, we now describe our implementation, beginning with the notation.

11.4.1 Notation

s, a, e, c, j — subscripts for sex, age, education, class of employment, and industry
E_{saecj} — employment matrix: number of workers in cell (s, a, e, c, j)
h_{saecj} — average hours per week in cell (s, a, e, c, j)
w_{saecj} — average weeks per year in cell (s, a, e, c, j)
c_{saej} — average hourly compensation of employees in cell (s, a, e, j)
H_{saecj} — hours worked by all workers in cell (s, a, e, c, j)
H_{l,j} — abbreviation for H_{saecj}: l = 1 is (s = 1, a = 1, e = 1, c = 1); l = 2 is (s = 1, a = 1, e = 1, c = 2); . . . ; l = 168 is (s = 2, a = 7, e = 6, c = 2)
L_{l,j} — labor input of cell l in industry j
P_{L,l,j} — price of labor input of cell l
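To make the abbreviation H_{l,j} concrete, the short sketch below (ours, not part of the chapter) enumerates the 2 × 7 × 6 × 2 = 168 cells in the order implied by the notation, with class of employment varying fastest; the function name and the coding of categories from 1 are assumptions made only for illustration.

```python
from itertools import product

# Dimension sizes from the classification: 2 sexes, 7 age groups,
# 6 education groups, 2 classes of employment -> 2*7*6*2 = 168 cells.
SEX, AGE, EDU, CLS = 2, 7, 6, 2

def cell_index(s, a, e, c):
    """Map a (sex, age, education, class) cell, each coded from 1,
    to the flat index l = 1..168 used for H_l and L_l."""
    return ((s - 1) * AGE * EDU * CLS
            + (a - 1) * EDU * CLS
            + (e - 1) * CLS
            + (c - 1) + 1)

# l = 1 is (1,1,1,1), l = 2 is (1,1,1,2), ..., l = 168 is (2,7,6,2).
assert cell_index(1, 1, 1, 1) == 1
assert cell_index(1, 1, 1, 2) == 2
assert cell_index(2, 7, 6, 2) == 168
assert [cell_index(*t) for t in product(range(1, 3), range(1, 8),
                                         range(1, 7), range(1, 3))] \
       == list(range(1, 169))
```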
11.4.2 Methodology

We express the industry volume of labor input as a translog index of the individual components:

(23)   \Delta \ln L_j = \sum_l \bar{v}_{l,j} \Delta \ln L_{l,j},

where the \bar{v}_{l,j} weights are the two-period average shares of each type of labor in total labor input. To quantify the impact of substitution among different types of labor input we assume that labor input for each category {L_l} is proportional to hours worked {H_l}:

(24)   L_{l,j} = Q_l H_{l,j},

where the constants of proportionality Q_l transform hours worked into flows of labor services. We assume that labor services for each category of hours worked are the same at all points of time. For example, an hour worked by a self-employed male worker, aged thirty-four, with four years of college education, represents the same labor input in 1977 as in 2000. Under the assumption in equation (24), the labor quantity index or volume index in equation (23) may be expressed in terms of hours worked:

(25)   \Delta \ln L_j = \sum_l \bar{v}_{l,j} \Delta \ln H_{l,j}.

Observations on the constants Q_l are thus not required. The corresponding price of labor input is the ratio of the value of labor compensation to the volume index. The total value is simply

(26)   P_{L,j} L_j = \sum_l P_{L,l,j} L_{l,j}.

Finally, the labor quality index measures the contribution of substitution among the components of labor input to the volume obtained from a given number of hours:

(27)   Q_{L,j} = \frac{L_j}{H_j},

where

(28)   H_j = \sum_l H_{l,j}

is the unweighted sum of hours worked.

To study the relative importance of workers with different levels of educational attainment, we define two further sets of indexes. The labor input of college-educated workers is defined as an aggregate over hours of the educational groups "BA" and "more than BA," while the input of non-college-educated workers is defined over the remaining groups—"grade 0–8," "some high school," "HS diploma," and "some college no BA." That is,

(29)   \Delta \ln L_{CO,j} = \sum_{s,a,c;\, e \in \{5,6\}} \bar{v}^{SK}_{saecj} \Delta \ln H_{saecj},
       \Delta \ln L_{NC,j} = \sum_{s,a,c;\, e \notin \{5,6\}} \bar{v}^{UN}_{saecj} \Delta \ln H_{saecj}.

15. This differs from Jorgenson, Gollop, and Fraumeni (1987) in three ways. Jorgenson, Gollop, and Fraumeni show that occupation has little impact when the other dimensions are taken into account, so that we have eliminated this classification. Second, we have eliminated the fourteen-to-fifteen age group, following the official publications of BLS. Third, we have disaggregated the educational group "4+ years college" into two subgroups, "4 years college" and "5+ years college."
16. The number of nonzero cells is smaller since the self-employed class is zero for many sectors.
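A minimal numerical sketch of equations (23), (25), (27), and (28) follows, assuming hypothetical hours and hourly compensation for three worker types in one industry; the figures and variable names are invented for illustration and do not come from the chapter's data.

```python
import numpy as np

# Hypothetical data for one industry: two periods (rows) and three
# worker types (columns). Hours worked and hourly compensation.
hours = np.array([[1000.0,  800.0, 500.0],    # period t-1
                  [1020.0,  780.0, 560.0]])   # period t
wages = np.array([[  15.0,   22.0,  40.0],
                  [  15.5,   23.0,  42.0]])

comp = wages * hours                              # labor compensation by cell
shares = comp / comp.sum(axis=1, keepdims=True)   # value shares by period
vbar = shares.mean(axis=0)                        # two-period average shares

# Equation (25): growth of the volume index of labor input.
dln_L = float(vbar @ np.log(hours[1] / hours[0]))

# Equations (27)-(28): quality is the volume index relative to unweighted hours.
dln_H = float(np.log(hours[1].sum() / hours[0].sum()))
dln_quality = dln_L - dln_H

print(f"labor input growth {dln_L:.4f}, hours growth {dln_H:.4f}, "
      f"quality growth {dln_quality:.4f}")
```

Because the compensation-weighted index gives more weight to high-wage types, the quality term is positive whenever hours shift toward those types, which is exactly the compositional change the text describes.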
11.4.3 Data

The main features of our procedure to construct the H_{saecj} and P_{L,saecj} matrices are the following.
• Detailed cross-classifications by characteristics of individual workers are taken from the Census of Population and the CPS. These data reflect individual incomes. Hours worked and labor compensation in each industry are obtained directly from the NIPAs. These are based on establishment data and reflect payroll records.
• The observational unit for establishment data is the "job" reflected on a payroll, while the corresponding unit for individual data is the "person." A person may be a "multiple job holder," and a job may be held by more than one person. To link persons and jobs we assemble four matrices of data—compensation, hours worked per week, weeks worked per year, and employment—cross-classified by the characteristics of individual workers.
• The NIPA hours produced by the BEA are adjusted to the BLS Survey of Hours at Work, since hours worked, rather than the hours paid recorded in the establishment data, is the appropriate measure of labor effort.
• Individual data provide estimates of wages, but payroll records also give fringe benefits. Our method requires the price of labor paid by employers, and we therefore scale the wages to total labor compensation in each industry given in the NIPAs.
• Self-employed individuals are a special problem since their labor and property compensation are not reported separately. In this study labor compensation of the self-employed per hour worked is assumed to be equal to that of the employees for each type of worker.

We begin with the public use tapes from the decennial Censuses of Population for 1990, 1980, and 1970, covering 1 percent of the population in 1990 and 1980 and 0.1 percent of the population in 1970. We employ individual observations for about one million workers to construct benchmark matrices of employment (E_{saecj}), weekly hours (h_{saecj}), weeks per year (w_{saecj}), and labor compensation (c_{saej}). For each worker, we collect data on age, highest grade completed, class of employment, primary industry of employment, weekly hours worked, weeks worked the previous year, and wage and salary income of employees from the previous year. This data set also provides population weights that allow us to derive national totals.

The benchmark matrices are adjusted so that they equal the aggregate tables published by the Bureau of the Census, using the method of iterative proportional fitting described in detail by Jorgenson, Gollop, and Fraumeni (1987, chap. 3, pp. 72–75). We make some additional refinements to deal with special features of labor data. First, there are multiple job holders working in more than one industry. We use detailed information on this from the May CPS to allocate hours for individual workers among industries and employment classes. Second, information on income is "top-coded." For each industry we fitted a lognormal distribution and used this to estimate the average income for the upper tail of the distribution. Third, the Census of Population for 1990 defines educational attainment as highest degree achieved. We used the estimates in Jaeger (1997) to bridge the new census definition to the old definitions that apply prior to the 1992 data.
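A minimal sketch of iterative proportional fitting for a two-way table with known row and column totals is given below; the adjustment used in the chapter is applied to much higher-dimensional cross-classifications, and the seed table and marginal totals here are invented.

```python
import numpy as np

def ipf(table, row_totals, col_totals, iters=100, tol=1e-10):
    """Iterative proportional fitting: rescale a nonnegative table so that
    its row and column sums match the given marginal totals."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_totals / t.sum(axis=1))[:, None]   # match row margins
        t *= (col_totals / t.sum(axis=0))[None, :]   # match column margins
        if (np.abs(t.sum(axis=1) - row_totals).max() < tol and
                np.abs(t.sum(axis=0) - col_totals).max() < tol):
            break
    return t

# Toy example: a census-based seed table adjusted to published totals.
seed = np.array([[20.0, 30.0],
                 [40.0, 10.0]])
fitted = ipf(seed, row_totals=np.array([60.0, 40.0]),
             col_totals=np.array([55.0, 45.0]))
print(fitted, fitted.sum(axis=1), fitted.sum(axis=0))
```

The procedure preserves the interaction structure of the seed table while forcing the margins to agree with the published aggregates, which is why it is well suited to combining detailed census cells with coarser control totals.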
In the next step we employ the CPS Annual Demographic File from the March surveys to estimate employment, hours, and compensation matrices for each year, again at the individual level. For the years 1977–91 we use the files reorganized by Unicon Research Corporation (2002) so that they have a convenient consistent structure through time. For years after 1991 we use the CPS March files directly. The data after 1991 are based on the new educational classifications of highest degree attained rather than the old classifications, number of years of schooling. These surveys have sample sizes of about 80,000 workers, only a tenth of the Census of Population. We do not use these annual data to estimate the matrix of 7,392 cells directly, but only to estimate matrices of smaller dimension. For example, for employment we estimate three marginal matrices: (a) a matrix with sex, class of employment, age, and education; (b) a matrix with sixty-six industries and class of employment; and (c) a matrix with nineteen industries and education. For weekly hours and annual weeks we cross-classify by sex, class, age, and ten industries. For hourly compensation of employees we estimate one matrix with sex, age, and education, and another with nineteen industries and education.17

Our extension of the industry classification to the IT-producing sectors raises specific issues, since some of them are not explicitly identified in the Census-CPS industry classification. Only one total is given for Electrical Machinery and Equipment; we therefore had to assume the same distribution of characteristics for Communications Equipment, Electronic Components, and Other Electrical Machinery. The data for Computer Services are similarly not separated from Other Business Services prior to 1970. Fortunately, the BLS Industry Output and Employment data include these sectors and provide industry totals.

Our first step is to scale the employment, hours, weeks, and compensation matrices based on the March CPS to the BLS annual tabulations based on the monthly CPS.18 These matrices are taken as marginal totals to interpolate and extrapolate the 1970, 1980, and 1990 census benchmarks. An initial guess at the full matrix is derived from the nearest two censuses and made consistent with these marginal totals for every year between 1970 and 1990. For years after 1990 we extrapolated the 1990 census in the same way. The census-based matrices are calibrated to the BLS totals for 1970, 1980, and 1990 in order to provide a smooth series. A further refinement would be, for example, to revise the 1981 CPS to take into account the 1980 census, but this could only be done with detailed information available within the Bureau of the Census.

17. We are able to capture much of the annual variation with these marginal matrices. Of course, we cannot capture as much detail as the census 1 percent sample; for example, we have nineteen industry groupings cross-classified by education, one of which is all of mining. Thus, for our forty-four sectors we assume that the education distribution for the three mining sectors is that found in the census but scaled to match the all-mining group.
18. A list of BLS reports is given in appendix A of Jorgenson, Gollop, and Fraumeni (1987). These include Employment and Earnings, Special Labor Force Reports, and BLS Bulletins. Other unpublished tabulations such as the educational attainment of self-employed and unpaid family workers were kindly provided by Tom Hale and Tom Nardone of the BLS.

The CPS covers only the civilian population. For data on military workers we turn to data from the Defense Manpower Data Center,19 which provides information on very similar demographic groups. Another detail to note is that the CPS also top-codes the income information. Since 1996 the CPS has provided the average of the top-coded wages; for years prior to 1992 we use the estimates in BLS (1993), appendix table E-1. For the years 1992–95 we use the averages we estimated from the 1990 census.

From the time series of matrices on employment, hours, weeks, and compensation based on individual data we generate a time series based on establishment data given in the NIPAs. The NIPA gives the total number of employees and self-employed for detailed industries.20 For the IT sectors not broken out in the NIPAs, these are allocated according to the employment data from the BLS 192-sector data. The number of workers is summed over the detailed sectors so that they match our industry classification for each class of employment, which we denote by NIPA(c, j). The NIPA does not include unpaid family workers, and these are taken from a separate BLS tabulation. The establishment-based employment matrix, denoted EE, is derived by scaling each part of the individual-based employment E_{saecj} equally to these NIPA totals:

E_{saecj} \to EE_{saecj}   such that   \sum_{s,a,e} EE_{saecj} = NIPA(c, j).
The NIPA gives the total hours worked by employees for fifteen industry groups, which we denote by NIPA_H(IND).21 Using the hours and weeks matrices described above we generate the establishment-based hours, denoted HE, on a fifty-two-weeks-per-year basis by scaling to this control total for each industry group:

h_{saecj}, w_{saecj} \to HE_{saecj}   such that   \sum_{s,a,e;\, j \in IND;\, c = employee} 52 \, EE_{saecj} HE_{saecj} = NIPA\_H(IND).

For the self-employed and unpaid family workers, the BLS Division of Productivity Research provides an estimate of total nonfarm hours for this group. We scale hours for these workers so that the sum over the nonfarm sectors equals this total. For hours in agriculture the Economic Research Service of the U.S. Department of Agriculture provides estimates of total hours.22

19. We thank Mike Dove and Scott Segerman of the Defense Manpower Data Center (DMDC) for making these data available to us.
20. Survey of Current Business, August 2001, tables 6.4C, 6.7C, and 6.8C give the most recent data. The time series is available at http://www.bea.gov. The number of self-employed for each sector is derived from subtracting the number of employees from total workers.
21. See Survey of Current Business, August 2001, table 6.9C. Kurt Kunze kindly provided separate information for total military hours.

Finally, we control the compensation matrix to the NIPA's "Compensation of Employees by Industry" and "Wage and Salary by Industry" (Survey of Current Business, August 2001, tables 6.2C and 6.3C). We first adjust the labor compensation matrix c_{saej} for employees only to the wage and salary totals for each industry. We then impute wage supplements and rescale to the "Compensation" totals, denoted NIPA_C, to get the establishment-based matrix, CE:

c_{saej} \to CE_{saecj}   such that   \sum_{s,a,e;\, c = employee} 52 \, EE_{saecj} HE_{saecj} CE_{saecj} = NIPA\_C(j).
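The scaling steps above share a common pattern: cells estimated from individual data are rescaled by a common factor so that a weighted sum hits a published control total. The sketch below illustrates that pattern for the hours adjustment on a fifty-two-weeks-per-year basis; the cell values and the control total are invented, and the function name is ours.

```python
import numpy as np

def scale_to_control(cells, weights, control):
    """Rescale all cells in a group by a common factor so that the
    weighted sum of the cells equals the published control total."""
    factor = control / float(np.sum(weights * cells))
    return cells * factor

# Hypothetical example mirroring the hours adjustment: average weekly
# hours for three worker cells, weighted by 52 weeks x employment,
# scaled so total annual hours match a NIPA-style control total.
avg_weekly_hours = np.array([38.0, 40.0, 35.0])
employment = np.array([120.0, 80.0, 50.0])
weights = 52.0 * employment
nipa_hours_total = 520_000.0   # invented control total

scaled_hours = scale_to_control(avg_weekly_hours, weights, nipa_hours_total)
assert np.isclose(np.sum(weights * scaled_hours), nipa_hours_total)
```

The employment and compensation adjustments follow the same logic, with the control totals taken by class of employment and industry or by industry alone.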
For the self-employed and unpaid workers, we set wages equal to those of employees in each category. We then control the sum of capital and labor income to our estimate of proprietor's income as described in section 11.3. Finally, we note one more adjustment to the NIPA data. The BEA revised the industry classification in the NIPAs from the 1972 SIC to the 1987 SIC. To obtain a consistent time series we transformed all the data prior to 1987 to the new basis using the 1987 data for both classifications provided by the BEA.

22. We are grateful to Larry Rosenblum of the BLS and Eldon Ball of the Department of Agriculture Economic Research Service (ERS) for generously sharing their unpublished data with us.

11.4.4 Issues

Level of Classification

The choice of a classification of workers is a crucial decision. Our classification system gives us a total of 7,392 different types of civilian labor. Characteristics of individual workers we have ignored may be important in assessing labor quality: for example, geographical location, occupation, and work experience. On the other hand, one could object that our system is too detailed to be implemented satisfactorily using the CPS with only 80,000 workers in each yearly sample. In response to the first objection we note that our characteristics have been used in other studies and have proven to capture the main features of the data at the aggregate level (e.g., BLS 1993). The size of the CPS sample makes further refinement difficult. As for the second objection, we remind the reader that we make use of the decennial census, which provides about one million observations on individual workers. The issue of sample size arises only for the annual CPS, where we are unable to capture year-to-year variation in all the detail available in the censuses. (The BLS [1993] study
uses only the CPS data and roughly the same number of groups—sex, age, educational attainment, experience, and class of employment—but has no industry detail.) An issue that has vexed labor economists in the United States is the change in educational classification in the CPS in 1992 and in the Census of Population in 1990. Jaeger (1997) has carefully analyzed this problem, and we make use of his tables to construct a bridge to link the new classification to the old. That is, we take the 1992 CPS, bridge it to the old classification, and estimate the change in labor quality between 1990 and 1992. For 1992 onward we use the new educational classification only.

Critiques of the Index Number Approach

The index number approach to measuring labor input has been subjected to a number of criticisms. One is that a proper measure of labor input should account for intensity of effort, as emphasized by Becker (1985). Intensity of effort is not directly observable. However, we should note that if the intensity of work differs by our demographic categories, and this is captured by some of the observed differences in wages, then intensity of effort is implicitly included in our measures.

The second, more serious, objection to the index number approach is the identification of rates of labor compensation with marginal products. What if these rates reflect "market power" by trade unions rather than productivity of workers? A conceptual approach to this issue is that market power is exercised on the supply side of the labor market by excluding individuals from jobs for which they would otherwise be qualified. This is consistent with price taking by producers demanding labor services and, therefore, with identification of wages with marginal products. A similar issue arises in discrimination by groups of workers who choose to segregate themselves by age, sex, race, or institution of undergraduate origin. Again, firms can be viewed as price takers, as required in modeling producer behavior.23

A more subtle criticism of equating compensation with marginal products arises from the "signaling" hypothesis of Spence (1973). The signaling hypothesis has a number of testable empirical implications. Tests like those reported by Kroch and Sjoblom (1994) have provided support for the human capital approach rather than the hypothesis of asymmetric information about ability. For example, Kroch and Sjoblom use two measures of education in modeling wages for individual workers—years of schooling completed and rank in the educational distribution, a proxy for unobserved ability under the signaling hypothesis. Using longitudinal data for individuals, years of schooling clearly dominates rank, which gives some confidence that the distortion due to signaling is small.

23. These "market imperfections" are discussed in BLS (1993), pp. 42–43.
11.4.5 Results

Table 11.12 gives the growth rates of labor input, college-educated labor input, and non-college-educated labor input for the period 1977–2000. The growth of labor input is greatest for Computer Services, the IT-producing industry that generates software, and for Other Business Services. These industries also lead in the growth of college-educated and non-college-educated labor inputs. Health Services and Professional and Social Services round out the list of four leading industries in the growth of labor input, while the second echelon of four industries includes Legal Services, Private Education, Finance, and Construction. The three remaining IT-producing industries—Computers, Communications Equipment, and Electronic Components—all experienced substantial growth in labor input but did not rise to the level of the leading industries. At the other end, the biggest shrinkages in labor input are in Coal Mining, Textiles and Apparel, and Primary Metals.

Growth in college-educated labor input dominates that of non-college-educated input for nearly all industries, with the exceptions of Legal Services and Public Education. Both college and noncollege labor input decline for Coal Mining, but eighteen of our forty-three industries with labor input reduced labor input for non-college-educated workers (the Household sector is the one industry with no labor input). The list of four leading industries in the growth of college-educated labor input is rounded out by Finance and Other Services, while the remaining leaders in growth of non-college-educated input are Legal and Health Services.

Table 11.13 gives growth rates of labor input for the three subperiods. Computer Services is the leading industry in all three, while Other Business Services leads in two of the three, relinquishing second position in 1990–95 to Motor Vehicles. The four leading industries before 1990 also include Legal and Health Services, while Transportation and Private Education round out the list for 1990–95. The surge in employment after 1995 is led by Communications and Private Education. The second echelon of four industries in at least one subperiod also encompasses six other industries, including Computers and Electronic Components.

Table 11.14 gives growth rates of college-educated labor input for the three subperiods. Computer Services and Other Business Services led before 1990 and after 1995, but Agriculture and Motor Vehicles had the highest growth rates during 1990–95. The list of the top four industries before 1990 includes Health and Legal Services, while Motor Vehicles and Communications joined the list after 1995. During the period 1990–95 the four leading industries also include Computer Services and Rubber and Plastics. The second echelon of four industries in at least one subperiod includes eight other industries, indicating the pervasiveness of growth in college-educated labor input.
Table 11.12   Growth of industry labor input, 1977–2000

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
Total: 0.08 –0.74 –4.14 0.00 2.89 1.32 0.24 –1.71 0.43 0.10 1.11 –0.61 1.23 2.50 0.72 –0.07 0.69 –0.01 –2.35 0.39 1.92 0.58 –1.43 1.75 2.47 1.98 0.36 –0.99 1.76 1.68 3.68 1.73 2.25 10.97 6.08 4.54 3.69 3.68 3.95 2.66 1.41 0.00 0.95 1.83
College: 2.47 0.17 –3.36 1.10 4.16 2.82 2.19 0.74 2.09 1.46 3.47 1.64 3.52 4.90 3.95 3.77 4.22 2.72 0.32 2.48 4.71 3.27 0.13 3.36 5.60 5.55 3.67 1.85 3.00 3.32 6.08 3.04 3.28 12.48 6.70 5.43 3.76 4.27 4.83 5.96 3.42 0.00 3.69 0.83
Non-college: –0.47 –0.94 –4.25 –0.60 2.73 1.11 –0.07 –2.08 0.18 –0.18 –0.39 –1.74 0.35 1.64 0.36 –1.62 –0.72 –0.33 –2.78 –0.04 0.88 –0.74 –2.18 1.42 1.92 0.67 –0.86 –2.00 1.28 1.33 1.22 0.86 1.64 9.52 5.87 3.81 3.95 2.20 2.87 2.15 0.96 0.00 –1.32 4.09
Average: 1.38 (Total); 3.33 (College); 0.69 (Non-college)
Note: All figures are average annual growth rates.
Table 11.13   Growth of labor input, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 0.08 –0.74 –4.14 0.00 2.89 1.32 0.24 –1.71 0.43 0.10 1.11 –0.61 1.23 2.50 0.72 –0.07 0.69 –0.01 –2.35 0.39 1.92 0.58 –1.43 1.75 2.47 1.98 0.36 –0.99 1.76 1.68 3.68 1.73 2.25 10.97 6.08 4.54 3.69 3.68 3.95 2.66 1.41 0.00 0.95 1.83
1977–90: –0.23 –0.52 –2.59 1.82 2.83 0.86 –0.56 –2.96 –0.35 –0.65 3.52 –0.96 1.57 2.85 –0.87 2.05 0.96 –0.39 –1.79 0.79 2.94 0.80 –1.21 1.74 2.00 1.12 2.03 –0.23 2.12 1.86 4.41 2.65 2.94 12.25 7.02 5.64 6.77 3.25 4.18 3.17 2.34 0.00 1.77 1.48
1990–95: 1.26 –0.42 –5.69 –2.53 1.39 1.83 0.50 –0.55 1.06 1.54 –3.97 –0.04 0.88 0.73 4.21 –6.87 –0.41 0.62 –0.55 0.57 1.14 0.00 –0.88 2.64 3.77 1.34 –1.45 –0.52 1.11 1.41 1.56 0.11 1.00 5.48 3.59 3.13 –0.55 3.66 3.08 1.87 0.37 0.00 –0.60 2.24
1995–2000: –0.33 –1.64 –6.59 –2.19 4.56 2.01 2.08 0.37 1.81 0.62 –0.07 –0.27 0.68 3.38 1.36 1.20 1.08 0.34 –5.61 –0.82 0.04 0.57 –2.56 0.88 2.39 4.85 –2.17 –3.42 1.48 1.46 3.90 0.93 1.70 13.14 6.12 3.08 –0.09 4.79 4.23 2.13 0.05 0.00 0.36 2.30
Average: 1.38 (1977–2000); 1.78 (1977–90); 0.63 (1990–95); 1.12 (1995–2000)
Note: All figures are average annual growth rates.
Table 11.14   Growth of college labor input, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 2.47 0.17 –3.36 1.10 4.16 2.82 2.19 0.74 2.09 1.46 3.47 1.64 3.52 4.90 3.95 3.77 4.22 2.72 0.32 2.48 4.71 3.27 0.13 3.36 5.60 5.55 3.67 1.85 3.00 3.32 6.08 3.04 3.28 12.48 6.70 5.43 3.76 4.27 4.83 5.96 3.42 0.00 3.69 0.83
1977–90: 3.95 1.50 –0.08 4.04 5.90 2.44 1.63 0.72 2.00 1.29 7.38 1.99 4.47 5.51 2.37 7.11 4.80 3.15 1.29 3.24 6.41 4.34 0.83 3.37 7.17 6.07 6.25 3.27 4.58 4.49 7.76 4.78 5.19 14.62 8.23 7.40 7.28 4.07 5.44 6.99 5.45 0.00 5.89 –0.52
1990–95: 6.10 2.95 –2.75 0.64 0.68 2.21 1.22 –0.60 0.16 2.80 –2.71 2.05 3.69 3.51 5.62 –5.69 2.12 2.88 2.63 3.64 4.43 2.80 1.30 5.04 4.52 3.62 0.74 1.50 0.36 0.32 2.87 0.97 0.86 5.57 2.32 2.30 –1.00 3.73 3.10 3.74 –0.65 0.00 0.16 1.68
1995–2000: –5.01 –6.09 –12.50 –6.07 3.08 4.41 4.61 2.15 4.27 0.59 –0.52 0.33 0.88 4.73 6.37 4.53 4.79 1.44 –4.55 –0.64 0.55 0.94 –2.86 1.67 2.58 6.10 –0.13 –1.47 1.54 3.30 4.90 0.55 0.74 13.83 7.10 3.44 –0.62 5.34 4.95 5.51 2.20 0.00 1.48 3.51
Average: 3.33 (1977–2000); 4.51 (1977–90); 1.89 (1990–95); 1.67 (1995–2000)
Note: All figures are average annual growth rates.
Finally, table 11.15 gives growth rates of non-college-educated labor input for the three subperiods. Service industries head the list before 1990—Computer Services, Other Business Services, Public Education, and Legal Services. Computer Services and Other Business Services also lead during 1990–95 and 1995–2000, but Motor Vehicles and Health Services follow immediately afterward during 1990–95, and Construction and Communications follow during 1995–2000. As before, the list of industries in the second echelon in at least one period is a lengthy one and includes four additional industries.

The most important overall conclusion from these results is the domination of college-educated over non-college-educated labor input in the growth of employment for the period 1977–2000 as a whole and for the three subperiods 1977–90, 1990–95, and 1995–2000. While many industries have reduced non-college-educated labor input during this period, most have added college-educated workers. Our methodology captures the impacts of this ongoing restructuring of the labor force by weighting hours worked for each type of labor input by labor compensation per hour. Massive investments in higher education by college-educated workers before they enter the labor force are obviously essential prerequisites for this ongoing transformation of the work force.

11.5 Measuring Industry Productivity

We now examine the sources of U.S. economic growth at the industry level. The contributions of capital and labor inputs and gains in aggregate TFP discussed in section 11.1 ultimately reflect the evolution of the production structure at the industry level, and it is critical to examine the component industries. Changes in this production structure cumulate into the determinants of economic growth as technologies evolve, and prices and economic incentives are altered accordingly.

11.5.1 Methodology

Our methodology for measuring TFP at the industry level begins with an industry production function:

(30)   Y_j = f(K_j, L_j, X_j, T),

where Y is industry output,24 K is capital input, L is labor input, X is intermediate input, and T is an indicator of efficiency, all for industry j. The variables K, L, and X are each aggregates of the many components described in the preceding sections, and the production function in equation (30) is assumed to be separable in these components.

24. We refer to this simply as "output," although the term gross output is sometimes used to distinguish this measure from value added.
Table 11.15   Growth of non-college labor input, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: –0.47 –0.94 –4.25 –0.60 2.73 1.11 –0.07 –2.08 0.18 –0.18 –0.39 –1.74 0.35 1.64 0.36 –1.62 –0.72 –0.33 –2.78 –0.04 0.88 –0.74 –2.18 1.42 1.92 0.67 –0.86 –2.00 1.28 1.33 1.22 0.86 1.64 9.52 5.87 3.81 3.95 2.20 2.87 2.15 0.96 0.00 –1.32 4.09
1977–90: –1.17 –0.97 –2.95 0.60 2.41 0.66 –0.88 –3.46 –0.68 –1.01 1.20 –2.24 0.53 1.91 –1.14 0.51 –0.21 –0.74 –2.17 0.38 1.87 –0.68 –2.03 1.46 1.11 –0.45 0.75 –1.29 1.11 1.34 1.72 1.31 1.49 9.94 6.45 4.30 5.51 1.41 2.75 2.66 1.63 0.00 –1.39 5.61
1990–95: –0.31 –1.25 –6.18 –4.73 1.49 1.78 0.37 –0.53 1.23 1.14 –5.58 –1.59 –0.44 –0.50 4.03 –7.66 –1.66 0.22 –1.24 –0.29 –0.57 –1.91 –2.40 2.06 3.59 0.12 –2.54 –1.43 1.52 1.72 –0.46 –0.63 0.95 5.65 4.50 3.80 2.18 3.32 3.09 1.47 0.65 0.00 –1.44 2.98
1995–2000: 1.18 –0.54 –5.69 0.42 4.79 1.61 1.59 –0.05 1.38 0.67 0.67 –0.58 0.67 3.09 0.61 –1.10 –1.10 0.19 –5.90 –0.88 –0.26 0.30 –2.35 0.68 2.37 4.12 –3.35 –4.42 1.48 0.91 1.59 1.19 2.72 12.29 5.73 2.54 1.68 3.16 2.95 1.50 –0.49 0.00 –1.03 1.25
Average: 0.69 (1977–2000); 0.86 (1977–90); 0.11 (1990–95); 0.83 (1995–2000)
Note: All figures are average annual growth rates.
Let P_{Y,j}, P_{K,j}, P_{L,j}, and P_{X,j} denote the prices for output and the three inputs, respectively. Under the assumptions of constant returns to scale and competitive markets, a translog index of TFP growth is defined as

(31)   v_{T,j} = \Delta \ln Y_j - \bar{v}_{K,j} \Delta \ln K_j - \bar{v}_{L,j} \Delta \ln L_j - \bar{v}_{X,j} \Delta \ln X_j,

where \bar{v} is the two-period average share of the subscripted input in the value of nominal output. Note that the assumptions imply that the value of output is equal to the sum of outlays on all inputs, so that P_{Y,j} Y_j = P_{K,j} K_j + P_{L,j} L_j + P_{X,j} X_j and the value shares sum to 1.0.25 Equation (4) provides an alternative definition in terms of value added:

(32)   v_{T,j} = \bar{v}_{V,j} \Delta \ln V_j - \bar{v}_{K,j} \Delta \ln K_j - \bar{v}_{L,j} \Delta \ln L_j.

While useful for aggregation purposes, this definition fails to identify the role of intermediate inputs, such as semiconductors used in producing computers and communications equipment. The definition of industry-level TFP growth in equation (31) is more useful for this purpose and will be employed in our study of growth of individual industries. Under the same assumptions as in equation (31), we decompose industry labor productivity growth, or growth of output per hour worked, as

(33)   \Delta \ln y_j = \bar{v}_{K,j} \Delta \ln k_j + \bar{v}_{L,j} \Delta \ln Q_{L,j} + \bar{v}_{X,j} \Delta \ln x_j + v_{T,j},

where lowercase letters refer to output and inputs per hour worked. The terms on the right-hand side are, respectively, the contributions of capital deepening, labor quality, intermediate input deepening, and TFP growth to growth of labor productivity. We refer to equation (31) as TFP growth or productivity growth, and equation (33) as labor productivity growth.

25. See Hall (1988), Basu and Fernald (1995, 1997), and Basu, Fernald, and Shapiro (2001) for alternative assumptions and their implications.
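A minimal numerical sketch of equations (31) and (33) for a single hypothetical industry follows; the growth rates and value shares are invented, and the code simply implements the share-weighted arithmetic described above.

```python
# Hypothetical two-period log growth rates for one industry.
dln_Y, dln_K, dln_L, dln_X = 0.040, 0.050, 0.010, 0.035
dln_H, dln_qual = 0.005, 0.005   # hours and labor quality (dln_L = dln_H + dln_qual)

# Two-period average value shares in nominal output (sum to 1 under CRS).
vK, vL, vX = 0.15, 0.35, 0.50

# Equation (31): TFP growth is output growth left unexplained by the inputs.
vT = dln_Y - vK * dln_K - vL * dln_L - vX * dln_X

# Equation (33): labor productivity growth and its sources (per hour worked).
dln_alp = dln_Y - dln_H
contrib = {
    "capital deepening":      vK * (dln_K - dln_H),
    "labor quality":          vL * dln_qual,
    "intermediate deepening": vX * (dln_X - dln_H),
    "TFP":                    vT,
}
assert abs(sum(contrib.values()) - dln_alp) < 1e-12
print(f"TFP growth {vT:.4f}; ALP growth {dln_alp:.4f}", contrib)
```

Because the shares sum to one, the four contributions add up exactly to labor productivity growth, which is the accounting identity exploited in tables 11.16 and 11.17.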
11.5.2 Results

Table 11.16 presents the sources of growth for each industry based on equation (31), where the growth of output is the sum of the contributions of capital, labor, and intermediate inputs and TFP growth. An important feature of our methodology is the explicit role provided for intermediate inputs. Consider, for example, the output of the semiconductor industry. Much of this output is invisible at the aggregate level, because semiconductor products are mainly inputs into other industries rather than deliveries to final demand as consumption and investment goods. Semiconductor inputs, however, play a key role in the improvements in the quality and performance of computers, communications equipment, instruments, and a host of other products.26

26. See Dulberger (1993), Triplett (1996), and Oliner and Sichel (2002) for details.

More specifically, semiconductors are an output of Electronic Components
Table 11.16   Sources of growth of industry output, 1977–2000 (the Capital, Labor, and Intermediate columns are input contributions)

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
Output: 2.40 1.53 1.89 –0.32 1.65 2.00 1.65 0.22 1.98 1.40 23.70 2.28 8.73 18.15 2.60 2.32 3.25 1.60 0.33 1.64 2.06 1.45 0.47 3.61 2.76 4.48 2.17 –1.87 4.33 2.65 7.72 2.29 2.74 13.52 5.16 3.18 1.86 2.45 4.36 3.29 2.46 3.69 1.38 2.13
Capital: 0.08 0.34 0.24 1.09 0.14 0.18 0.24 –0.03 0.26 0.63 0.53 –0.06 1.87 1.99 0.15 0.33 0.57 0.27 0.10 0.38 0.72 0.55 0.08 0.41 0.35 1.91 0.89 0.42 1.53 0.57 2.80 1.30 1.59 3.38 1.75 0.53 0.94 0.14 1.31 0.66 0.83 3.69 0.76 0.51
Labor: 0.04 –0.28 –1.37 –0.01 1.12 0.38 0.08 –0.28 0.11 0.01 0.33 –0.20 0.38 0.93 0.07 –0.06 0.30 0.00 –0.66 0.09 0.78 0.12 –0.07 0.58 0.92 0.48 0.05 –0.09 0.83 0.78 1.26 0.64 0.27 4.19 3.11 2.63 1.82 1.98 1.87 1.06 0.54 0.00 0.62 1.62
Intermediate: 0.38 0.65 0.26 –0.30 1.47 1.13 0.89 –0.34 1.17 1.28 9.96 1.70 4.77 5.56 2.45 1.64 2.03 0.90 –0.20 0.96 1.09 0.86 0.95 1.33 1.35 1.99 0.88 –1.30 0.88 1.21 3.04 2.06 0.68 5.98 0.69 1.52 0.22 1.21 1.45 1.59 1.08 0.00 0.00 0.00
TFP: 1.90 0.82 2.77 –1.10 –1.08 0.31 0.44 0.87 0.45 –0.52 12.87 0.84 1.70 9.67 –0.07 0.40 0.35 0.43 1.10 0.21 –0.53 –0.09 –0.49 1.29 0.15 0.09 0.34 –0.90 1.09 0.09 0.61 –1.71 0.21 –0.03 –0.39 –1.50 –1.11 –0.87 –0.26 –0.01 0.01 0.00 0.00 0.00
Notes: Output and total factor productivity (TFP) are average annual growth rates. Capital, labor, and intermediate inputs are average annual contributions (share-weighted growth rates).
but appear as intermediate inputs into Computers, Communications Equipment, and other industries. Price declines resulting from improvements in semiconductor technology are reflected in the large contributions of intermediate inputs in the industries that consume semiconductors. By accurately accounting for intermediate inputs through the use of interindustry transactions tables we can allocate U.S. economic growth to its sources in individual industries.

The considerable impact of intermediate inputs on the growth of industry output is strikingly apparent in table 11.16. Intermediate input is the key contributor to the growth of output in Computers and Communications Equipment, as well as Computer Services (which produces software) and the Electronic Components industry. Intermediate input makes negative contributions to growth in Oil and Gas Mining; Primary Metal; Textiles, Apparel, and Leather; and Gas Utilities. The important role of intermediate input is entirely suppressed by using value added, rather than output, as a measure of activity at the industry level. We conclude that an approach based on equation (31) should be used wherever possible, as recommended by Schreyer (2001).

Investments in tangible assets and human capital are very important contributors to the growth of output. The contributions of capital input are positive for every industry except Other Electrical Machinery, reflecting the decline in low-end items like radios and TVs. Ignoring private households, which by definition have the largest contribution of capital input, the sectors where the contributions of capital input are particularly significant are Computer Services, Finance, and Electronic Components. Labor input makes large positive contributions to Computer Services, Other Business Services, Health Services, and Private Education. The contributions of labor input are negative for nine industries; the sharpest declines are in Coal Mining and in Textiles, Apparel, and Leather. Since labor input is an important source of aggregate economic growth, these are outweighed by positive contributions from the remaining thirty-two industries.

The final factor for the growth of output identified in table 11.16 is TFP growth. Computers and Electronic Components have the most dramatic contributions of TFP growth, where it accounts for about half of the output growth in these two industries. Productivity growth is also relatively important in Communications Equipment, but Computer Services, the remaining IT-producing industry, had negative productivity growth for the period 1977–2000 as a whole. Coal Mining outranks both of these industries in terms of productivity growth. Sixteen of the forty-one industries with nonzero productivity growth have negative contributions of productivity growth for the period as a whole. The perplexing phenomenon of negative productivity growth at the industry level was a primary motivation for the path-breaking research of Corrado and Slifman (1999) and Gullickson and Harper (1999).
One possible interpretation of negative TFP growth is a failure to account for changes in quality in the measurement of output. If price increases were systematically overstated due to a failure to hold quality constant, the growth of output and TFP would be correspondingly understated. An alternative and equally plausible explanation is that growth cannot be identified with changes in technology but rather reflects the growth of inputs. In this view, negative productivity growth reflects worsening productive efficiency due to such factors as rising barriers to entry, growing inflexibility in the allocation of labor, and health and safety regulations. Sorting out these alternatives remains a research topic of considerable importance.

In addition to TFP, equation (33) outlines a methodology for analyzing the sources of labor productivity growth in terms of capital deepening, intermediate input deepening, and labor quality. Capital deepening is defined in the same way as at the aggregate level, and intermediate deepening is defined analogously. The contribution of labor quality to labor productivity growth is the product of the value share of labor and the growth rate of labor input per hour worked. Finally, TFP growth contributes point for point to labor productivity growth.

Our decomposition of industry labor productivity growth for the period 1977–2000 is given in table 11.17. Labor productivity is defined simply as output per hour worked, so the rate of growth of output is the sum of the rates of growth of labor productivity and hours worked. Computers and Electronic Components have the highest growth rates of labor productivity for the period 1977–2000, as well as the highest growth rates of output. Communications Equipment and Coal Mining also have relatively high labor productivity growth. Construction, an industry that has long puzzled productivity analysts (e.g., Baily and Gordon 1988), has a negative rate of labor productivity growth, but this is also true of Gas Utilities and Other Business Services, as well as Health, Legal, and Private Education Services.

The contribution of capital deepening to labor productivity growth is positive for all industries except Construction. Finance, Electronic Components, Communications Equipment, and Coal Mining lead the list of industries with positive capital deepening. The contribution of intermediate deepening is positive for most sectors; the exceptions are Construction, Gas Utilities, Other Business Services, Legal Services, and Private Education. The contribution of labor quality reflects increases in the proportion of workers with higher marginal products and is negative for only three industries—Insurance, Other Business Services, and Legal Services.

We have analyzed the role of TFP growth as a source of industry output growth in detail above. Table 11.18 provides growth rates of productivity for forty-one industries for the period 1977–2000 and the subperiods 1977–90, 1990–95, and 1995–2000. During the resurgence of economic growth after 1995, Electronic Components and Computers achieve double-digit growth
Table 11.17   Sources of industry labor productivity growth, 1977–2000

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
ALP: 3.02 2.70 6.68 0.38 –1.02 1.17 1.82 2.36 1.91 1.69 22.62 3.30 7.76 15.98 2.02 2.48 3.16 1.81 3.12 1.76 0.55 1.47 2.28 2.18 0.57 2.92 2.18 –0.64 2.81 1.10 5.02 0.47 0.46 2.83 –0.99 –0.48 –1.84 –0.38 0.99 0.67 1.52 3.69 1.40 0.25
Capital deepening: 0.21 0.63 1.48 1.11 –0.02 0.07 0.24 0.20 0.23 0.65 0.28 0.05 1.70 1.74 0.10 0.34 0.57 0.28 0.32 0.40 0.50 0.55 0.21 0.26 0.03 1.44 1.00 0.68 1.27 0.38 2.10 1.06 0.12 0.78 0.53 0.20 0.38 0.05 0.67 0.27 0.57 3.69 0.72 0.31
Intermediate deepening: 0.76 1.11 2.21 0.28 –0.01 0.65 0.99 1.21 1.12 1.42 9.46 2.27 4.29 4.46 1.98 1.70 1.98 1.06 1.59 1.03 0.41 0.89 2.53 0.52 0.28 1.28 0.78 –0.44 0.33 0.56 1.97 1.15 0.14 1.98 –1.11 0.32 –1.03 –0.01 0.31 0.40 0.75 0.00 0.00 0.00
Labor quality: 0.14 0.14 0.23 0.09 0.09 0.15 0.15 0.08 0.11 0.14 0.02 0.13 0.07 0.11 0.02 0.04 0.27 0.03 0.12 0.13 0.17 0.13 0.02 0.11 0.10 0.11 0.07 0.02 0.12 0.06 0.34 –0.03 0.00 0.12 –0.03 0.51 –0.08 0.45 0.27 0.02 0.18 0.00 0.68 –0.06
TFP: 1.90 0.82 2.77 –1.10 –1.08 0.31 0.44 0.87 0.45 –0.52 12.87 0.84 1.70 9.67 –0.07 0.40 0.35 0.43 1.10 0.21 –0.53 –0.09 –0.49 1.29 0.15 0.09 0.34 –0.90 1.09 0.09 0.61 –1.71 0.21 –0.03 –0.39 –1.50 –1.11 –0.87 –0.26 –0.01 0.01 0.00 0.00 0.00
Notes: Average labor productivity (ALP) and total factor productivity (TFP) are average annual growth rates. Capital deepening, intermediate deepening, and labor quality are average annual contributions (share-weighted growth rates).
Table 11.18   Growth of industry total factor productivity, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 1.90 0.82 2.77 –1.10 –1.08 0.31 0.44 0.87 0.45 –0.52 12.87 0.84 1.70 9.67 –0.07 0.40 0.35 0.43 1.10 0.21 –0.53 –0.09 –0.49 1.29 0.15 0.09 0.34 –0.90 1.09 0.09 0.61 –1.71 0.21 –0.03 –0.39 –1.50 –1.11 –0.87 –0.26 –0.01 0.01 0.00 0.00 0.00
1977–90: 2.29 0.83 2.22 –3.03 –1.06 0.76 0.31 0.32 –0.29 –1.20 11.77 0.46 1.91 6.11 –0.28 0.29 0.88 0.51 0.80 –0.13 –0.73 –0.27 –0.07 1.24 0.18 0.49 –0.67 –0.88 1.50 –0.30 0.23 –2.14 0.14 0.39 –0.58 –1.68 –1.54 –0.31 0.35 –0.05 –0.40 0.00 0.00 0.00
1990–95: 0.83 1.16 3.39 1.84 –1.27 –1.42 0.32 0.67 1.16 0.49 11.87 1.71 3.23 10.58 –0.24 0.17 –0.45 0.63 0.83 –0.55 –1.09 –0.20 1.68 1.20 0.46 0.36 0.97 –2.19 1.03 –0.09 1.26 –0.93 0.93 1.62 0.64 –1.60 –0.65 –1.66 –0.93 –0.44 –0.37 0.00 0.00 0.00
1995–2000: 1.94 0.46 3.57 0.97 –0.95 0.87 0.87 2.51 1.64 0.23 16.75 0.93 –0.38 18.00 0.64 0.94 –0.25 0.02 2.14 1.83 0.56 0.49 –3.72 1.51 –0.21 –1.20 2.33 0.34 0.08 1.31 0.96 –1.38 –0.35 –2.79 –0.92 –0.93 –0.45 –1.55 –1.20 0.51 1.48 0.00 0.00 0.00
Note: All figures are average annual growth rates.
rates. Total factor productivity growth in Electronic Components accelerates steadily throughout the period 1977–2000, but the industry emerges as the leader in productivity growth only after 1995. Agriculture ranks third before 1990, and Coal Mining ranks third after 1990; both industries are far behind the two productivity leaders. Communications Equipment ranks fourth from 1990 to 1995, while Primary Metal ranks fourth after 1995.

The evolving patterns of TFP growth at the industry level are inconsistent with an explanation of negative productivity growth rates that relies solely on persistent errors of measurement. Rather, negative productivity growth rates appear to be a pervasive feature of industry performance in the U.S. economy across industries and over time. Only fourteen of the forty-one industries with nonzero productivity growth have positive productivity growth rates throughout the period 1977–2000, while five industries have negative productivity growth rates throughout the period. The remaining industries have both positive and negative growth rates of productivity; this clearly requires an explanation beyond persistent errors of measurement.

Labor productivity growth rates by industry and by subperiod are given in table 11.19. We see the same steady acceleration in labor productivity growth as in the growth of TFP in Computers and Electronic Components. Coal Mining outstrips Communications Equipment before 1990, and Computer Services is ranked fourth from 1990 to 1995 but drops below Coal Mining after 1995. Labor productivity growth also accelerates steadily in Communications Equipment but declines after 1995 in Computer Services. Construction is the only industry that exhibits a decline in labor productivity growth for all three subperiods.

11.6 Aggregation over Industries

To examine the U.S. economy as a whole, we next aggregate across industries. Jorgenson, Gollop, and Fraumeni (1987, chap. 2) show that the existence of an aggregate production function requires the very stringent assumption that value-added functions across industries are identical up to a scalar multiple. Here we employ an aggregate production possibility frontier introduced by Jorgenson (1995a) and used by Jorgenson and Stiroh (2000) and Jorgenson (2001). This methodology suppresses the industry dimension for inputs and assumes each input has the same marginal product in all industries, but relaxes the assumptions about value-added functions needed for the aggregate production function. We compare these to a third accounting approach for direct aggregation over industries.

11.6.1 Aggregation

We define the production possibility frontier as the efficient combination of outputs and inputs for the economy as a whole. Value added, V, consists
Table 11.19   Growth of industry labor productivity, by subperiod

Industry name (rows, in order): Agriculture; Non-energy mining; Coal mining; Oil and gas mining; Construction; Lumber, wood, furniture; Stone, clay, glass; Primary metal; Fabricated metal; Machinery, excl. computers; Computers and office equipment; Other electrical machinery; Communications equipment; Electronic components; Motor vehicles; Other transportation equipment; Instruments and miscellaneous mfg.; Food and tobacco; Textiles, apparel, leather; Paper; Printing and publishing; Chemicals; Petroleum refining; Rubber and plastic; Transportation; Communications; Electricity; Gas; Wholesale trade; Retail and eating; Finance; Insurance; Real estate (rental); Computer services; Business svc., excl. computer; Health, private; Legal; Education, private; Professional and social svcs.; Other services; Government enterprises; Household; Government, excl. education; Government education
Values by column, in the row order above:
1977–2000: 3.02 2.70 6.68 0.38 –1.02 1.17 1.82 2.36 1.91 1.69 22.62 3.30 7.76 15.98 2.02 2.48 3.16 1.81 3.12 1.76 0.55 1.47 2.28 2.18 0.57 2.92 2.18 –0.64 2.81 1.10 5.02 0.47 0.46 2.83 –0.99 –0.48 –1.84 –0.38 0.99 0.67 1.52 3.69 1.40 0.25
1977–90: 3.39 2.49 6.55 –1.39 –1.36 1.19 1.11 1.63 0.78 0.46 18.39 2.34 4.83 9.81 0.74 1.72 2.77 1.99 2.08 1.70 0.27 0.87 1.58 1.52 0.70 3.18 0.59 –1.83 2.62 0.67 4.29 0.75 0.07 2.38 –2.50 –0.73 –3.31 0.28 1.31 –0.38 0.74 3.59 1.48 0.20
1990–95: 1.83 2.69 6.20 2.64 –1.04 0.38 1.28 3.42 2.45 3.13 25.14 4.92 9.33 22.45 2.38 1.47 3.25 1.67 3.92 1.74 –0.25 2.22 2.27 3.40 0.30 3.63 3.27 –1.06 3.33 0.73 5.31 0.31 1.47 6.38 0.36 –0.86 –0.72 –1.69 –0.13 1.90 1.48 2.76 1.50 0.38
1995–2000: 3.23 3.26 7.49 2.72 –0.14 1.92 4.18 3.20 4.31 3.43 31.10 4.17 13.82 25.57 5.01 5.49 4.10 1.50 5.05 1.94 2.08 2.28 4.12 2.70 0.50 1.54 5.22 2.85 2.78 2.59 6.62 –0.10 0.45 0.47 1.60 0.57 0.87 –0.78 1.27 2.20 3.58 4.85 1.11 0.24
Note: All figures are average annual growth rates.
of value added from all J industries and is produced from primary inputs and technology as (34)
$V = V(V_1, \ldots, V_J) = f(K, L, T),$
where V, K, and L are aggregate value added, capital services, and labor input, respectively. The production possibility frontier does not impose the assumption of perfect substitution of value added between industries as required for existence of an aggregate production function. We define value added as a Tornqvist index over industry value added: (35)
$\Delta \ln V = \sum_j \bar{w}_j \, \Delta \ln V_j,$
where $\bar{w}_j$ is the two-period average share of industry value added in aggregate value added and $V_j$ is from equation (4). There are many types of capital, $K_k$ (e.g., computers and tractors), and labor inputs, $L_l$ (e.g., high school–educated men and college-educated women), and market equilibrium requires that each input earns the same return in all industries. This assumption allows us to sum each input across industries to obtain aggregate capital services and labor input: (36)
$K_k = \sum_j K_{k,j}, \qquad L_l = \sum_j L_{l,j},$
where the k subscript indexes the type of capital and l indexes the type of labor. Aggregate capital services and labor input are defined as Tornqvist indexes of all types of capital and labor, respectively: (37)
$\Delta \ln K = \sum_k \bar{w}_k \, \Delta \ln K_k, \qquad \Delta \ln L = \sum_l \bar{w}_l \, \Delta \ln L_l,$
where the weights are the two-period average share of each type of capital or labor input in total capital or labor input. PK and PL are the corresponding price indexes for capital and labor, respectively. We then define TFP growth as (38)
$v_T = \Delta \ln V - \bar{v}_K \, \Delta \ln K - \bar{v}_L \, \Delta \ln L,$
where the shares are the two-period average share of each input in aggregate value added. As above, we can estimate the sources of aggregate labor productivity (value added per hour worked) as:
(39)
$\Delta \ln v = \bar{v}_K \, \Delta \ln k + \bar{v}_L \, \Delta \ln LQ + v_T,$
where the definition of the determinants is the same as for industries, except there is no intermediate input component.
An alternative approach, developed by Jorgenson, Gollop, and Fraumeni (1987, chap. 2), is aggregation over industries, which provides estimates of aggregate TFP growth but also maintains the industry accounts as the basic building blocks. This avoids the assumption of mobility of inputs across industries and allows a decomposition of aggregate productivity growth to its industry-level sources.27 We begin with the definition of TFP growth from the production possibility frontier in equation (38) and industry TFP growth in equation (32). Multiply industry productivity growth (equation 33) by the industry share of aggregate value added ($\bar{w}_j$), divide through by the industry share of value added in output ($\bar{v}_{V,j}$), and sum across all industries. This gives
(40)
$\sum_j \bar{w}_j \, \Delta \ln V_j = \sum_j \frac{\bar{w}_j}{\bar{v}_{V,j}} v_{T,j} + \sum_j \bar{w}_j \frac{\bar{v}_{K,j}}{\bar{v}_{V,j}} \Delta \ln K_j + \sum_j \bar{w}_j \frac{\bar{v}_{L,j}}{\bar{v}_{V,j}} \Delta \ln L_j .$
Subtracting equation (40) from equation (38) yields the following decomposition of productivity growth from the aggregate production possibility frontier:
(41)
$v_T = \left( \sum_j \frac{\bar{w}_j}{\bar{v}_{V,j}} v_{T,j} \right) + \left( \sum_j \bar{w}_j \frac{\bar{v}_{K,j}}{\bar{v}_{V,j}} \Delta \ln K_j - \bar{v}_K \, \Delta \ln K \right) + \left( \sum_j \bar{w}_j \frac{\bar{v}_{L,j}}{\bar{v}_{V,j}} \Delta \ln L_j - \bar{v}_L \, \Delta \ln L \right)$
$\phantom{v_T} = \sum_j \frac{\bar{w}_j}{\bar{v}_{V,j}} v_{T,j} + \mathrm{REALL}_K + \mathrm{REALL}_L .$
The first set of parentheses in equation (41) is the sum of "Domar-weighted" industry rates of TFP growth, the second set is the reallocation of capital across industries ($\mathrm{REALL}_K$), and the third is the reallocation of labor across industries ($\mathrm{REALL}_L$). Productivity growth from the aggregate production possibility frontier exceeds Domar-weighted industry productivity growth when the reallocation terms are positive. This happens, for example, when industries with higher prices of capital inputs have higher growth rates of capital input or when industries with higher prices of labor inputs have higher growth rates of labor input.28
27. Note that Domar (1961) did assume mobility across industries.
28. Note that if we used capital stocks rather than capital services, there would be no $\mathrm{REALL}_K$ term because a given asset has the same price across all industries by construction. This implies that simple sums and translog indexes across industries are identical. Service prices for each asset, however, do differ across industries due to differences in rates of return and taxes, so the term $\mathrm{REALL}_K$ is nonzero.
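As a concrete illustration of equations (38), (40), and (41), the following sketch is a hypothetical Python fragment (not part of the chapter; the two industries, shares, and growth rates are invented) showing that aggregate TFP growth from the production possibility frontier equals the Domar-weighted sum of industry TFP growth rates plus the capital and labor reallocation terms:

# Hypothetical two-industry illustration of equations (38), (40), and (41).
# All shares are two-period averages; growth rates are log differences.
industries = {
    "ind1": dict(w=0.30, vV=0.45, vK=0.15, vL=0.30, dV=0.06, dK=0.05, dL=0.02),
    "ind2": dict(w=0.70, vV=0.55, vK=0.20, vL=0.35, dV=0.03, dK=0.04, dL=0.01),
}
# w  = industry share of aggregate value added
# vV = share of value added in industry output; vK, vL = capital and labor shares of output
# dV, dK, dL = growth of industry value added, capital services, and labor input

# Industry TFP growth implied by the industry accounts:
# vV*dV = vT + vK*dK + vL*dL  =>  vT = vV*dV - vK*dK - vL*dL
for d in industries.values():
    d["vT"] = d["vV"] * d["dV"] - d["vK"] * d["dK"] - d["vL"] * d["dL"]

# Aggregate value-added growth as in equation (35); aggregate input growth and
# value-added shares for the frontier (equation 38) are simply posited here.
dlnV = sum(d["w"] * d["dV"] for d in industries.values())
vK_bar, vL_bar = 0.35, 0.65
dlnK, dlnL = 0.045, 0.014
vT_aggregate = dlnV - vK_bar * dlnK - vL_bar * dlnL        # equation (38)

# Domar-weighted industry TFP and the two reallocation terms of equation (41).
domar_tfp = sum((d["w"] / d["vV"]) * d["vT"] for d in industries.values())
reall_K = sum(d["w"] * d["vK"] / d["vV"] * d["dK"] for d in industries.values()) - vK_bar * dlnK
reall_L = sum(d["w"] * d["vL"] / d["vV"] * d["dL"] for d in industries.values()) - vL_bar * dlnL

print(f"aggregate TFP growth (eq. 38)      : {vT_aggregate:.5f}")
print(f"Domar-weighted TFP + reallocations : {domar_tfp + reall_K + reall_L:.5f}")

Because equation (41) is an accounting identity given equation (35), the two printed numbers agree exactly; with actual data, the size of the reallocation terms indicates how far input prices differ across industries.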
A third alternative aggregation methodology, employed by Jorgenson, Gollop, and Fraumeni (1987, chap. 9), is based on an aggregate production function. Here, the price of a unit of value added must be the same in all industries in order to reduce the production possibility frontier to an aggregate production function. Under this assumption, value added from the aggregate production function, $V_{PF}$, is defined as a simple sum across industries: (42)
$V_{PF} = \sum_j V_j .$
We define the difference in the growth rates of value added from the production possibility frontier in equation (34) and the aggregate production function in equation (42) as the reallocation of value added. This difference reveals the error in aggregation that results from assuming the existence of an aggregate production function and equal value-added prices. These results are presented in table 11.2 and were discussed in section 11.1.
An important feature of our methodology is that we are able to identify the contributions of individual industries to aggregate economic growth. This includes both the direct contribution to value added and the flows of goods and services among industries as intermediate inputs in the interindustry transactions tables. Triplett (1996), for example, has quantified the role of semiconductors as an input into the computer industry. Under plausible assumptions, falling semiconductor prices account for essentially all of the price decline in computers. Building on this observation, Oliner and Sichel (2000) constructed a model of the U.S. economy with three industries—computers, semiconductors, and all other products. The framework for aggregation over industries originated by Jorgenson, Gollop, and Fraumeni (1987, chap. 9) extends the same principles to all products and all industries.
The ingenious weighting scheme in equation (40), originated by Domar (1961), plays a key role in our framework for aggregation over industries. In this scheme the growth rate of each industry's output is weighted by the ratio of two proportions. The first is the proportion of each industry's value added in aggregate value added. The second is the proportion of value added in the industry's output. The ratio of these two proportions—the so-called Domar weight—captures both the relative importance of the industry in value added for the economy as a whole and the relative importance of value added in the industry's output. Note that the sum of the Domar weights exceeds unity.

11.6.2 Aggregation Results

Table 11.20 gives industry contributions to aggregate value added and TFP growth. Figure 11.1 arrays the industry contributions to value added in order of relative importance. We have weighted the growth rates of value added by industry from table 11.4 by each industry's share of value added
Table 11.20    Industry contributions to aggregate value added and TFP growth, 1977–2000

                                            Value added                          Productivity
Industry name                        Weight   Growth   Contribution   Domar weight   Growth   Contribution
Agriculture                           0.019     4.74       0.09           0.045        1.90       0.09
Non-energy mining                     0.002     1.89       0.00           0.004        0.82       0.00
Coal mining                           0.003     2.88       0.01           0.005        2.77       0.01
Oil and gas mining                    0.016     0.12      –0.03           0.030       –1.10      –0.10
Construction                          0.046     0.40       0.01           0.103       –1.08      –0.12
Lumber, wood, furniture               0.008     2.21       0.02           0.021        0.31       0.01
Stone, clay, glass                    0.005     1.55       0.01           0.011        0.44       0.00
Primary metal                         0.008     1.61       0.00           0.027        0.87       0.01
Fabricated metal                      0.013     1.87       0.02           0.029        0.45       0.01
Machinery, excl. computers            0.016     0.33      –0.01           0.034       –0.52      –0.03
Computers and office equipment        0.004    42.01       0.17           0.012       12.87       0.16
Other electrical machinery            0.007     1.40       0.01           0.017        0.84       0.01
Communications equipment              0.004     7.36       0.03           0.007        1.70       0.01
Electronic components                 0.005    25.44       0.15           0.011        9.67       0.12
Motor vehicles                        0.008     0.94       0.01           0.038       –0.07       0.00
Other transportation equipment        0.011     1.38       0.02           0.023        0.40       0.01
Instruments and miscellaneous mfg.    0.014     2.23       0.03           0.025        0.35       0.01
Food and tobacco                      0.019     2.51       0.05           0.066        0.43       0.03
Textiles, apparel, leather            0.009     1.49       0.01           0.025        1.10       0.03
Paper                                 0.008     1.80       0.01           0.020        0.21       0.00
Printing and publishing               0.013     1.73       0.02           0.024       –0.53      –0.01
Chemicals                             0.019     1.44       0.02           0.047       –0.09      –0.01
Petroleum refining                    0.004     1.26      –0.01           0.031       –0.49      –0.01
Rubber and plastic                    0.007     5.23       0.04           0.016        1.29       0.02
Transportation                        0.029     2.77       0.08           0.057        0.15       0.01
Communications                        0.020     4.47       0.09           0.036        0.09       0.00
Electricity                           0.017     2.01       0.03           0.028        0.34       0.01
Gas                                   0.004    –4.40      –0.02           0.018       –0.90      –0.03
Wholesale trade                       0.051     5.35       0.28           0.079        1.09       0.09
Retail and eating                     0.064     2.48       0.16           0.110        0.09       0.01
Finance                               0.032     7.74       0.24           0.054        0.61       0.03
Insurance                             0.015     0.50       0.01           0.032       –1.71      –0.05
Real estate (rental)                  0.057     2.71       0.15           0.076        0.21       0.01
Computer services                     0.008    12.00       0.09           0.013       –0.03      –0.01
Business svc., excl. computer         0.022     6.30       0.14           0.030       –0.39      –0.01
Health, private                       0.045     2.47       0.10           0.067       –1.50      –0.10
Legal                                 0.010     2.61       0.02           0.014       –1.11      –0.01
Education, private                    0.006     2.23       0.01           0.011       –0.87      –0.01
Professional and social svcs.         0.044     4.42       0.19           0.067       –0.26      –0.03
Other services                        0.032     3.13       0.10           0.058       –0.01       0.00
Government enterprises                0.015     2.13       0.03           0.023        0.01       0.00
Household                             0.138     3.69       0.51           0.138        0.00       0.00
Government, excl. education           0.084     1.38       0.12           0.084        0.00       0.00
Government education                  0.038     2.13       0.08           0.038        0.00       0.00
Sum                                   1.00                  3.08           1.70                    0.16
Notes: All figures are annual averages. Value-added weights are industry value-added as a share of aggregate value added. Domar weights are industry output as a share of aggregate value added. A contribution is a share-weighted growth rate.
for the U.S. economy as a whole. Note that these value-added shares sum to one. Three IT-producing industries—Computers, Electronic Components, and Computer Services—have the highest rates of growth of value added. However, the contributions of these industries to value added for the economy as a whole are relatively modest, reflecting the small relative size of these industries. The Household sector is the most important contributor to the growth of aggregate value added. While the growth rate of value added in this industry is only a little above the economywide growth rate, the Household sector accounts for a whopping 13.7 percent of aggregate value added, giving its growth rate a very large weight. By comparison, the value-added weight for Computers is 0.4 percent, while that of Electronic Components is 0.5 percent. Wholesale Trade, Finance, and Professional and Social Services—all industries with very sizable weights—round out the list of leading contributors to growth of aggregate value added.
Figure 11.2 ranks industries by their contributions to aggregate TFP growth. This picture contrasts sharply with that for value added. Table 11.18 has shown that Computers and Electronic Components have the highest rates of productivity growth at the industry level. Weighting these growth rates by the Domar scheme described above, we find that the contributions of these industries, 0.16 and 0.12, respectively, more than exhaust the aggregate rate of TFP growth of 0.16. One possible conclusion is that all productivity growth originates in these two IT-producing industries. However, this conclusion would be highly misleading, as the sum of the contributions of the next two industries—Agriculture and Wholesale Trade, at 0.09 each—also exhausts TFP growth for the economy as a whole. The resolution of the apparent paradox is that aggregate TFP growth is the sum of positive and negative contributions from forty-one different
Fig. 11.1   Industry contributions to value-added growth, 1977–2000
industries. Offsetting the large positive contributions of the four leading sectors we have identified are large negative contributions by Insurance, Oil and Gas Mining, Health Services, and Construction. The correct inference from the data on TFP growth given in table 11.20 is that industry-level TFP growth can be either negative or positive and that aggregate TFP growth must give appropriate weight to both.
Domar-weighted TFP growth rates given in table 11.2 above are sums of
Fig. 11.2   Industry contributions to total factor productivity, 1977–2000
Note: Industries sorted by productivity contribution.
contributions from table 11.20. These weights capture the impact of TFP growth both in the industry where it takes place and in the industries that purchase that industry's output as intermediate inputs and as investment goods. The IT component is the sum over the four IT-producing industries—Computers, Communications Equipment, Electronic Components, and Computer Services. This rises steadily over the period 1977–2000, reflecting accelerations in productivity growth for Computers and Electronic Components. The contributions of Communications Equipment and
Computer Services are comparable in magnitude and opposite in sign, so that these relatively small contributions are mutually offsetting. The non-IT component of the Domar-weighted growth rates given in table 11.2 is the sum over the thirty-seven non-IT industries. This component also accelerates over the period 1977–2000 but is negative before 1990, offsetting the positive contribution to aggregate productivity growth by the IT-producing industries. The non-IT component is positive after 1990 and contributes to the modest recovery of aggregate productivity growth from 1990 to 1995 and the acceleration after 1995.
Table 11.2 also gives a decomposition of TFP growth. This includes the Domar-weighted productivity growth rates for the IT and non-IT sectors, as well as reallocations of capital and labor input within these sectors. These reallocations are defined precisely in equation (41) and reflect differences in the cost of capital and wage rates among industries. For the period 1977–2000 as a whole the reallocations are fairly modest in size, so that the TFP growth rate closely tracks the Domar-weighted sum of productivity growth rates at the industry level. For the subperiod 1990–95 the reallocation of non-college-educated labor input among sectors is economically significant and makes a negative contribution to TFP growth, suggesting that non-college-educated labor input is moving to sectors with lower wage rates for a given set of characteristics—age, sex, and class of employment. This movement masks part of the recovery in productivity growth in both IT and non-IT industries that took place during 1990–95 and exaggerates the acceleration in productivity growth after 1995. The Domar-weighted sum of industry productivity growth rates in table 11.2 is the most accurate indicator of this acceleration at the industry level.
Our final set of results focuses on the contributions of each industry to the resurgence of U.S. TFP growth after 1995. Figure 11.3 presents Domar-weighted growth rates of TFP for each industry for 1977–95 and 1995–2000, ranked by the 1995–2000 contribution. The combined increases in two IT-producing industries, Electronic Components and Computers, account for most of the acceleration in productivity for the economy as a whole. However, it would be seriously misleading to attribute the entire aggregate increase to these two industries. The contributions of renewed productivity growth in Retail Trade and Eating and in Oil and Gas Mining are comparable in magnitude to Electronic Components as a source of the resurgence of TFP growth. There are many industries that reveal increases in TFP growth and others that show declines. Wholesale Trade, Petroleum Refining, Computer Services, and Professional and Social Services are industries that experienced substantial slowdowns in TFP growth while aggregate TFP growth accelerated.
We conclude that attribution of TFP growth to industries is highly arbitrary. It is far easier and much more important to allocate the
Fig. 11.3   Domar-weighted productivity contributions, 1995–2000 versus 1977–95
Note: Industries sorted by 1995–2000 productivity contribution.
contributions of capital and labor inputs to individual industries, as we have done above. This shows that the resurgence in economic growth is due largely to massive investments in IT capital input and college-educated labor input in a relatively small number of service industries, as well as in the Household and government sectors.
Greenwood, Hercowitz, and Krusell (1997, 2000) and Hercowitz (1998) have attributed some 60 percent of postwar U.S. economic growth to investment-specific productivity growth. As evidence, they note that the relative price of equipment has fallen 3 percent per year. We have incorporated this effect by calculating growth rates of productivity for all industries. Investment goods are produced by specific industries and are priced as industry outputs. As a leading example, we have shown that declining prices of IT investment goods are the consequence of productivity growth in the IT-producing sectors.

11.7 Conclusions

We have completed our program of attributing U.S. economic growth to its sources at the industry level. For this purpose we have employed new conceptual tools and data, specifically industry-level models of production and the production possibility frontier for the U.S. economy as a whole. This approach makes it possible to incorporate data on output and intermediate input from a time series of interindustry transactions tables, to identify the impact of the IT-producing industries, and to trace the role of investments in IT equipment and higher education in U.S. economic growth. Finally, this approach embodies internationally recommended standard practices for productivity measurement, as presented by Schreyer (2001) in the OECD Productivity Manual.
Our first conclusion is that many of the concepts used in earlier industry-level growth accounting should be replaced. The aggregate production function heads this list. The data indicate that the production possibility frontier better captures the impact of information technology on both outputs and inputs at the industry level. This is especially apparent when detailed industry data reveal the large changes in relative prices between IT-producing and non-IT-producing industries. Similarly, capital stock as a measure of capital input and hours worked as a measure of labor input, both still commonly used in growth accounting, ignore important compositional changes and must be replaced by the measures of capital and labor input presented above.
Investments in information technology and higher education stand out as the most important sources of growth at both industry and economywide levels. While investments in IT take place within the sectors we have identified, investments in higher education are undertaken by future workers when they are enrolled in institutions of higher education, primarily colleges and universities. We have measured the growth of labor input from college-educated and non-college-educated workers, but not the massive investment that precedes it. We have measured investment and capital stock, as well as capital input, from IT and non-IT capital.
The growth of TFP is an important source of growth, but it is far less
significant in economic terms than investments in information technology and higher education. Tracing TFP growth to its sources at the industry level is highly problematical, however, because industry-level TFP growth rates can be positive or negative and can alternate between the two. Nonetheless, a key force behind TFP growth at the aggregate level can be identified in the dramatic growth of productivity in the Computer and Electronic Component industries. This stands out in comparisons among industries, as well as through aggregation over industries.
Total factor productivity growth outside the IT-producing sectors also makes important contributions to fluctuations in aggregate TFP growth. This is especially apparent during the post-1995 growth resurgence, when a sharp recovery in Retail Trade and Oil and Gas Mining made a significant contribution to the revival of TFP growth. However, this revival plays a modest part in the surge in growth, relative to IT investment. The dramatic IT investment boom after 1995 culminates a steady rise in the importance of this investment over the period 1977–2000. Although investment in IT dominates investment in non-IT as a source of growth of capital input, non-IT investment is also important in the growth resurgence.
The growth of college-educated labor input dominates that of non-college-educated labor input during the period of our study. This growth is concentrated in trade, finance, and service industries that also make large investments in IT. A possible explanation is that college-educated labor is complementary to IT capital, so that the decline in the price of IT drives up the demand for both IT capital and college-educated workers. An alternative explanation is that productivity growth is biased toward college-educated workers, making them relatively more productive than non-college-educated workers.
Enormous uncertainties still surround the relationship between equity valuations and future growth prospects of the American economy. One theory attributes rising equity valuations since the growth resurgence that began in 1995 to the accumulation of intangible assets, such as intellectual property and organizational capital. An alternative theory treats the high valuation of technology stocks as a bubble that burst, beginning in 2000.29 Similar uncertainties characterize investments in human capital by knowledge workers. The uncertainties are magnified by the great distance in time between these investments and the compensation for the labor services of knowledge workers.
The restructuring of the American economy in response to the progress of information technology has been massive and continuous. The structure of output is shifting toward the IT-producing industries, but even more substantially toward the IT-using industries. The capital deployed in the
economy is moving rapidly toward IT equipment and software. Finally, the composition of the work force is evolving toward more college-educated workers as investments in higher education continue to rise. These structural changes create risks and rewards for investments in both IT and higher education that businesses and individuals must learn to manage.
Our final conclusion is that restructuring our official statistics to describe the ongoing structural changes more accurately is an important step in the management of the new risks that we face collectively. With the establishment of international standards for productivity measurement, the time is ripe for incorporation of industry and economywide production accounts into systems of national accounts, as proposed by Christensen and Jorgenson (1995), Jorgenson, Gollop, and Fraumeni (1987), and Jorgenson (2001). This requires integration of interindustry transactions tables, like those we have employed in measuring output and intermediate input, with accounts for gross product originating, like those we have used in allocating value added between capital and labor compensation. It also requires measures of flows of services from tangible assets and human capital, like those that provide the basis for our measures of capital and labor input.
Table 11A.1    Classification of civilian labor force

                        No.   Categories
Sex                       2   Male; female
Class                     2   Employees; self-employed and unpaid
Age                       7   16–17; 18–24; 25–34; 35–44; 45–54; 55–64; 65 and over
Education, 1980–92        6   0–8 years grade school; 1–3 years high school; 4 years high school; 1–3 years college; 4 years college; 5 or more years college
Education, 1992 onward    6   0–8 years grade school; grade 9–12, no diploma; high school graduate; some college, no BA; bachelor's degree; more than BA degree
Industry                 44   Industries listed in table 11.3
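The cross-classification above is what allows hours worked to be weighted by relative compensation when labor input is measured. As a purely illustrative sketch (not from the chapter; the worker types, hours, and wages below are hypothetical), the following Python fragment computes a Tornqvist index of labor input over such cells and contrasts it with growth in unweighted hours, the difference being the compositional or "quality" effect discussed in the text:

import math

# Two illustrative worker types (a tiny subset of the full cross-classification).
# Tuples are (hours_t0, hours_t1, wage_t0, wage_t1); all numbers are hypothetical.
types = {
    "college":    (40.0, 44.0, 30.0, 32.0),
    "noncollege": (60.0, 61.0, 18.0, 18.5),
}

def comp_share(data, t):
    # Compensation share of each type in period t (t = 0 or 1).
    comp = {k: v[t] * v[t + 2] for k, v in data.items()}
    total = sum(comp.values())
    return {k: c / total for k, c in comp.items()}

s0, s1 = comp_share(types, 0), comp_share(types, 1)

# Tornqvist labor input: weight each type's log hours growth by its
# two-period average compensation share.
d_ln_L = sum(0.5 * (s0[k] + s1[k]) * math.log(v[1] / v[0]) for k, v in types.items())

# Unweighted hours growth.
d_ln_H = math.log(sum(v[1] for v in types.values()) / sum(v[0] for v in types.values()))

print(f"labor input growth    : {d_ln_L:.4f}")
print(f"hours growth          : {d_ln_H:.4f}")
print(f"composition (quality) : {d_ln_L - d_ln_H:.4f}")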
References Baily, Martin N. 2002. The new economy: Post mortem or second wind. Journal of Economic Perspectives 16:3–22. Baily, Martin N., and Robert J. Gordon. 1988. The productivity slowdown and explosion of computer power. Brookings Papers on Economic Activity, Issue no. 2:1–45. Baily, Martin N., and Robert Z. Lawrence. 2001. Do we have a new e-conomy? American Economic Review, Papers and Proceedings 91:308–13.
Basu, Susanto, and John G. Fernald. 1995. Are apparent productive spillovers a figment of specification error? Journal of Monetary Economics 36:165–88. ———. 1997. Returns to scale in U.S. production: Estimates and implications. Journal of Political Economy 105:249–83. Basu, Susanto, John G. Fernald, and Matthew D. Shapiro. 2001. Productivity growth in the 1990s: Technology, utilization, or adjustment? Carnegie-Rochester Conference Series on Public Policy 55:117–66. Becker, Gary. 1985. Human capital, effort, and the sexual division of labor. Journal of Labor Economics 3:S33–S58. Berndt, Ernst R., and Melvyn A. Fuss. 1986. Productivity measurement with adjustments for variations in capacity utilization and other forms of temporary equilibrium. Journal of Econometrics 33:7–29. Blades, Derek. 2001. Measuring capital: A manual on the measurement of capital stocks, consumption of fixed capital, and capital services. Paris: Organization for Economic Cooperation and Development. Bonds, Belinda, and Tim Aylor. 1998. Investment in new structures and equipment in 1992 by using industries. Survey of Current Business 78 (8): 26–51. Brynjolfsson, Erik, and Lorin M. Hitt. 2000. Beyond computation: Information technology, organization transformation, and business practices. Journal of Economic Perspectives 14:23–48. Bureau of Economic Analysis (BEA). 1997. Note on alternative measures of gross product by industry. Survey of Current Business 77 (11): 84–85. ———. 1999. Fixed reproducible tangible wealth in the United States, 1925–94. Washington, DC: U.S. Department of Commerce. Bureau of Labor Statistics (BLS). 1993. Labor composition and U.S. productivity growth, 1948–90. Bureau of Labor Statistics Bulletin no. 2426. Washington, DC: U.S. Department of Labor. Campbell, John Y., and Robert J. Shiller. 1998. Valuation ratios and the long-run stock market outlook. Journal of Portfolio Management 24 (2): 11–26. Christensen, Laurits R., and Dale W. Jorgenson. 1995. Measuring economic performance in the private sector. In Postwar U.S. economic growth, ed. Dale W. Jorgenson, 175–272. Cambridge: MIT Press. Congressional Budget Office. 2002. The budget and economic outlook: Fiscal years 2003–2012. Washington, DC: Government Printing Office, January. Council of Economic Advisors. 2002. Economic report of the president. Washington, DC: U.S. Government Printing Office, February. Corrado, Carol, and Lawrence Slifman. 1999. Decomposition of productivity and unit costs. American Economic Review, Papers and Proceedings 89:328–32. Domar, Evsey. 1961. On the measurement of technological change. Economic Journal 71:709–29. Dulberger, Ellen R. 1993. Sources of decline in computer processors: Selected electronic components. In Price measurements and their uses, ed. Murray F. Foss, Marilyn Manser, and Allan H. Young, 103–24. Chicago: University of Chicago Press. Federal Reserve Board. 1995. Balance sheets for the U.S. economy. Release C.9. Washington, DC: Federal Reserve Board, June. ———. 1997. Flow of funds accounts of the United States. Release Z.1. Washington, DC: Federal Reserve Board. Fraumeni, Barbara. 1997. The measurement of depreciation in the U.S. national income and product accounts. Survey of Current Business 77 (7): 7–23. Gordon, Robert J. 2002. Technology and economic performance in the U.S. economy. NBER Working Paper no. 8771. Cambridge, MA: National Bureau of Economic Research, February.
Greenwood, Jeremy, Zvi Hercowitz, and Per Krusell. 1997. Long-run implications of investment-specific technological change. American Economic Review 87:342– 63. ———. 2000. The role of investment-specific technological change in the business cycle. European Economic Review 44:91–115. Gullickson, William, and Michael J. Harper. 1999. Possible measurement bias in aggregate productivity growth. Monthly Labor Review 122 (2): 47–67. Hall, Robert E. 1988. The relationship between price and marginal cost in U.S. industry. Journal of Political Economy 96:921–47. ———. 2001. The stock market and capital accumulation. American Economic Review 91:1185–1202. Hall, Robert E., and Dale W. Jorgenson. 1996. Tax policy and investment behavior. In Tax policy and the cost of capital, ed. Dale W. Jorgenson, 1–26. Cambridge: MIT Press. Hercowitz, Zvi. 1998. The “embodiment” controversy: A review essay. Journal of Monetary Economics 41:217–24. Herman, Shelby W. 2000a. Fixed assets and consumer durable goods. Survey of Current Business 80 (4): 17–30. ———. 2000b. Fixed assets and consumer durable goods for 1925–99. Survey of Current Business 80 (9): 19–30. Ho, Mun Sing, and Dale W. Jorgenson. 1999. The quality of the U.S. workforce, 1948–95. Harvard University, Department of Economics. Manuscript. Ho, Mun Sing, and Kevin J. Stiroh. 2001. The embodiment controversy: You can’t have two prices in a one-sector model. Federal Reserve Bank of New York. Manuscript. Jaeger, David A. 1997. Reconciling the old and new Census Bureau education questions: Recommendations for researchers. Journal of Business and Economic Statistics 15:300–309. Jorgenson, Dale W. 1995a. The embodiment hypothesis. In Postwar U.S. economic growth, 25–50. Cambridge: MIT Press. ———. 1995b. Productivity and economic growth. In International comparisons of economic growth, 1–88. Cambridge: MIT Press. ———. 1996. Capital theory and investment behavior. In Capital theory and investment behavior, 1–16. Cambridge: MIT Press. ———. 2001. Information technology and the U.S. economy. American Economic Review 91:1–32. Jorgenson, Dale W., Frank M. Gollop, and Barbara M. Fraumeni. 1987. Productivity and U.S. economic growth. Cambridge: Harvard University Press. Jorgenson, Dale W., and Zvi Griliches. 1995. The explanation of productivity change. In Postwar U.S. economic growth, ed. Dale W. Jorgenson, 51–98. Cambridge: MIT Press. Jorgenson, Dale W., Mun S. Ho, and Kevin J. Stiroh. 2002. Projecting productivity growth: Lessons from the U.S. growth resurgence. Federal Reserve Bank of Atlanta Economic Review 87 (3): 1–13. Jorgenson, Dale W., and Kevin J. Stiroh. 2000. Raising the speed limit: U.S. economic growth in the information age. Brookings Papers on Economic Activity, Issue no. 1:125–211. Jorgenson, Dale W., and Kun-Young Yun. 2001. Lifting the burden: Tax reform, the cost of capital, and U.S. economic growth. Cambridge: MIT Press. Kroch, Eugene A., and Kriss Sjoblom. 1994. Schooling as human capital or a signal. Journal of Human Resources 29:156–80. Lane, Richard N. 1999. Appraisal report “large aerospace firm” personal property, Los Angeles County, March 1, 1995. Revised February 1.
Lum, Sherlene K. S., and Brian C. Moyer. 2000. Gross domestic product by industry for 1997–99. Survey of Current Business 80 (12): 24–35. McKinsey Global Institute. 2001. U.S. productivity growth, 1995–2000: Understanding the contribution of information technology relative to other factors. Washington, DC: McKinsey Global Institute, October. Oliner, Stephen D. 1993. Constant-quality price change, depreciation, and the retirement of mainframe computers. In Price measurements and their uses, ed. Murray F. Foss, Marilyn E. Manser, and Allan H. Young, 19–61. Chicago: University of Chicago Press. ———. 1994. Measuring stocks of computer peripheral equipment: Theory and application. Washington, DC: Board of Governors of the Federal Reserve System, May. Oliner, Stephen D., and Daniel E. Sichel. 2000. The resurgence of growth in the late 1990s: Is information technology the story? Journal of Economic Perspectives 14:3–22. ———. 2002. Information technology and productivity: Where are we now and where are we going? Federal Reserve Bank of Atlanta Economic Review 87 (3): 15– 44. Schreyer, Paul. 2001. OECD productivity manual: A guide to the measurement of industry-level and aggregate productivity growth. Paris: Organization for Economic Cooperation and Development, March. Shiller, Robert. 2000. Irrational exuberance. Princeton, NJ: Princeton University Press. Solow, Robert M. 1960. Investment and technical progress. In Mathematical methods in the social sciences, 1959, ed. Kenneth J. Arrow, Samuel Karlin, and Patrick Suppes, 89–104. Stanford, CA: Stanford University Press. Spence, A. Michael. 1973. Job market signaling. Quarterly Journal of Economics 87:355–74. Stiroh, Kevin J. 2002. Information technology and the U.S. productivity revival: What do the industry data say? American Economic Review 92 (5): 1559–76. Triplett, Jack E. 1996. High-tech industry productivity and hedonic price indices. In Industry productivity, international comparison, and measurement issues, 119– 42. Paris: Organization for Economic Cooperation and Development. Unicon Research Corporation. 2002. CPS utilities. Santa Monica, CA: Unicon Research Corporation. United Nations. 1993. System of national accounts 1993. New York: United Nations. Whelan, Karl. 2002. Computers, obsolescence, and productivity. Review of Economics and Statistics 84:445–62.
Comment
Michael J. Harper
This paper by Dale Jorgenson, Mun Ho, and Kevin Stiroh (hereafter JHS) is the latest in a thirty-six-year series of major empirical studies of U.S. eco-
The author is chief of the Division of Productivity Research and Program Development of the U.S. Bureau of Labor Statistics and a member of the executive committee of the Conference on Research in Income and Wealth. The views expressed here are those of the author and do not necessarily reflect the views of the Bureau of Labor Statistics or its staff.
nomic growth conducted by Dale Jorgenson and various coauthors. It has its roots in Jorgenson’s paper with Zvi Griliches (1967), in which most of the distinctive methodological features of this series of studies were introduced. While there are a few refinements in this paper, the methodology for measuring industry-level outputs and inputs is essentially the same as that used in the book by Jorgenson, Frank Gollop, and Barbara Fraumeni (1987; hereafter JGF). This framework has stood the test of time and remains the state of the art in accounting for economic growth. At the Bureau of Labor Statistics (BLS) we institutionalized many of the distinctive features of the input side of this framework in our “multifactor productivity measures” (BLS 1983), and our colleagues at the Bureau of Economic Analysis (summarized by Steve Landefeld and Robert Parker [1995]) have adopted its approach to measuring output with a chained superlative index number. This tradition continues to flourish. Within the past few years Australia and Canada have published productivity measures using this framework, and Great Britain is working on similar measures. So the absence of major methodological innovation does not detract from the importance of this particular study. To the contrary, it is very important to look at newly emerging issues in the economy, like the high-tech revolution, through the lens of measurement methods that were established and understood a priori. It is fortunate indeed that Jorgenson continues to guide the tradition through his involvement in new empirical work. It is with admiration and respect for the continuity of an important tradition that people have labeled these studies the “Jorgenson Statistical Agency.” Why has this particular tradition maintained its momentum? It is because it devises a set of national accounts and productivity statistics that are built on neoclassical principles. By this I mean that the methodological choices flow from mathematical models of production and from assumptions of efficient economic behavior. These models drive the notion of capital “services” (as distinct from capital stock), the “Translog” aggregation technique, the “quality” adjustments to inputs and outputs, the treatment of intermediate inputs, and even the choice of the scope of the economy. I will comment on some of these major methodological features before offering a few thoughts on the results. But first we should recall the state of statistics back around the 1960s. The Conference on Research in Income and Wealth had played a major role in shaping the U.S. National Income and Product Accounts (NIPAs) as celebrated in the fiftieth-anniversary volume edited by Berndt and Triplett (1990). The NIPAs were crafted to measure the “final” goods and services created by the economy. This was of course a key concept in the Keynesian model. Many of the current measurement programs of U.S. statistical agencies emerged during the 1940s, 1950s, and 1960s to support improved measurement of real final production. The strategy was to tabulate
nominal market transactions, remove any that represented unfinished goods or services moving between stages of processing, and then express the result in real terms. These measures of real gross domestic product (GDP, although back then gross national product was emphasized) fast became the basis for macro models, models of inflation, and aggregate productivity statistics. A problem was that many strong assumptions were being built very deeply into the structure of the statistical edifice. The literature of the 1950s, 1960s, and 1970s focused on these assumptions, and many methods emerged that reduced the degree of restrictiveness. The Jorgenson series of empirical papers has restructured and recalculated the aggregate measures using the more flexible methods. In the 1950s, the premier shortcoming of national accounts was the lack of a capital measure. National accountants treated capital goods as a “final output,” as Keynes had dictated, but neglected the very important role of capital as an input. Since it was recognized that capital goods were not ultimately consumed, national accountants developed “net national product” to measure economic growth net of depreciation. The removal of depreciation mirrored the removal of sales of unfinished goods. However, this treatment oversimplifies the dynamic role that capital plays in production and in economic growth. After several years of debate in the literature that probed the capital measurement issue, Solow (1957) proposed a model that treated capital as an input that contributed to production and to labor productivity. The weight for capital’s contribution to labor productivity, capital’s cost share, emerged from assuming competitive markets for inputs. The residual of capital’s contribution to labor productivity growth is known as “total factor” or “multifactor” productivity. Early on, there was resistance to the Solow model, with many authors pointing to internal inconsistencies in Solow’s capital measure. The Jorgenson and Griliches (1967) article (hereafter JG) was the first major empirical study to resolve an important inconsistency concerning the aggregation of equipment and buildings. JG elaborated Solow’s “production theory” to handle a situation where there are different types of capital by treating each input type as a separate category (each a separate variable in a production function). Capital inputs were viewed as flows of services that could be different for each type of capital. The prices of renting these services, rather than the prices of buying the capital goods, were identified by JG as the relevant prices for employing capital inputs in a given year’s production. The property income earned by capital assets was the relevant compensation of capital inputs. This methodology eliminated the strong assumption involved when capital stocks (for example, stocks of trucks and stocks of buildings) were simply added together. This technique has been expanded here by JHS to elucidate the role of computers in the recent productivity growth surge.
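To illustrate the distinction between renting and buying that Harper highlights here, the short sketch below is a hypothetical Python fragment (not from the comment) using a simplified user cost formula, ignoring taxes, in which the one-period rental price of an asset reflects the interest cost, depreciation, and expected asset price change:

# Simplified user cost (rental price) per dollar of asset, ignoring taxes:
#   user_cost = price * (nominal_rate + depreciation_rate - asset_price_inflation)
# All figures below are hypothetical and for illustration only.

def user_cost(price, nominal_rate, depreciation, asset_inflation):
    """Approximate one-period rental price of an asset purchased at `price`."""
    return price * (nominal_rate + depreciation - asset_inflation)

# A computer depreciates fast and its price falls rapidly; a building does neither.
computer = user_cost(price=1.0, nominal_rate=0.05, depreciation=0.30, asset_inflation=-0.15)
building = user_cost(price=1.0, nominal_rate=0.05, depreciation=0.03, asset_inflation=0.02)

print(f"user cost per dollar, computer: {computer:.2f}")   # 0.50
print(f"user cost per dollar, building: {building:.2f}")   # 0.06

On these illustrative numbers, a dollar of computer capital must earn roughly half its value in rental income each period, while a dollar of structures earns only a few cents, which is why service-flow weights give rapidly depreciating IT assets a much larger role in capital input than stock weights would.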
The JG article pushed the aggregate accounting frontier forward in two other important ways: different types of labor were treated as separate inputs, and a flexible index number formula was used for all aggregation. This “translog” production function, which was later estimated econometrically by Christensen, Jorgenson, and Lau (1973), was a very general specification that placed no a priori restrictions on the elasticities of substitution among inputs. The index number formula used by JG was related to the translog. It was a decade later that Diewert (1976) showed that many index number formulas correspond to particular specifications of production functions and production frontiers. Diewert pointed out that the translog index corresponded to an index number formula proposed earlier by Tornqvist. The Tornqvist and the Fisher Ideal index formulas were dubbed “superlative” by Diewert, meaning that they could be derived from flexible functional specifications. The JG empirical deployment of flexible index number formulas was decades ahead of its time. In 1967, real GDP in the U.S. national accounts was constructed by adding up “constant base-period dollars.” Constant dollars were simple and provided a good approximation to the superlative formulas, as long as relative prices remained stable. After sixteen years and some intensive research, the BLS (1983) introduced business-sector multifactor productivity measures that used superlative formulas to aggregate inputs. By the mid-1980s, computers, with their rapidly declining prices, upset the ability of constant-dollar aggregates of GDP to stably track real economic growth. After extensive research the BEA introduced superlative formulas into the official computation of GDP (Landefeld and Parker 1995). But the JHS paper is the first empirical study of growth, since the computer revolution of the late 1990s, to cover the entire U.S. economy and to use a full tool kit of neoclassical techniques. JHS delves, further than earlier published materials, into what must be true about the formal structures of production of the various industries in order to build up aggregate measures that meaningfully summarize the economy. The formal assumptions are certainly strong, even when we employ flexible index number formulas to do all of the aggregations. Potential bias is introduced by chaining together time series of index numbers and of contributions, and so the onehundredth percentage point precision with which JHS present their results may overstate the accuracy. Having said that, I will side with Samuelson and Solow that it is worthwhile to build aggregates, and I will side with the authors on using the flexible formulas to do so. The authors try to rigorously account for intermediate inputs in real terms. They treat a commodity produced by one industry and consumed by a second industry as an output of the first industry and an input into the second. These intermediate items are then systematically excluded from outputs and inputs in aggregating up measures of total economy produc-
tivity, using Evsey Domar’s (1961) model and superlative index techniques. In order to ensure that this “real production account” is internally consistent, the authors build their real measures up from deflated time series of nominal input-output tables. The only consistent time series of tables available is that produced by the BLS Division of Industry Employment Projections (DIEP). While consistent over time in their structure, these tables are derived from the detailed information in only a single benchmark input-output table (as published by BEA—the 1992 table is the basis for this study). Now the advantage of the flexible aggregation employed by JHS is that it places no a priori restrictions on the structure of production. The DIEP series of tables are constructed by making elegant use of the assumption of fixed input requirements, so the resulting tables do not fully reflect the true flexibility of substitution that the JHS model could accommodate. This was equally a limitation of the production accounting efforts by Gullickson and me (Gullickson and Harper 1999, 2002). The Jorgenson measurement tradition would regard each input-output table as a snapshot of the outcome of the many markets in the dynamic economy. Future efforts at production accounting would best be based on time series of tables built from the actual detailed information in historical benchmark input-output tables. It will be necessary to go back in time and rearrange the information in the tables to be consistent over time, in terms of definitions and categories. The BEA and BLS plan to investigate the feasibility and efficacy of an effort to construct such a real production account. The JHS paper draws special attention to its measures of labor input and labor “quality,” noting that employment of lower-wage workers was unusually high during the late 1990s. This development partially offsets the steady upgrading of the average skills of the labor force, which has been an important source of post–World War II U.S. economic growth, and estimating the size of the contribution is one of the hallmarks of Jorgenson’s series of studies. I do want to quibble with the JHS semantics, while emphasizing my approval of the substance of what is done. The term quality conventionally refers to the “characteristics” or to the “degree of excellence” of a member of some specific group. Examples of higher-quality members would be better-trained doctors or better computers. In JHS, labor quality improvements would include employment shifts from janitors to doctors, while capital quality improvements would include investment shifts from office buildings to computers. The BLS (1983, 2002) has substituted the term composition change for quality change to avoid any needless connotation about a worker’s intrinsic worth. These authors are certainly entitled to define their own terms, but the reader must be careful not to confuse quality in the broad sense of JHS with the more narrow kind of quality exemplified by the increased speed and capacity of computers. The JHS conclusions emphasize the large role of computers in the late 1990s, a result that reflects large computer quality adjustments (of the
more narrow type). The last column of table 11.2 compares growth rates (and contributions to growth rates) for 1995–2000 to 1990–95. Labor productivity accelerated by .93 percentage points, and JHS attribute the sources of this as follows: IT capital deepening (information processing equipment and software), .51; IT industry total factor productivity (TFP), .11; all other industries’ TFP, .17; all other measured sources of compositional change (including non-IT capital deepening, labor quality, and industry composition effects), .15. Thus, according to the authors, the production and use of IT goods accounts for about two-thirds of the labor productivity speedup in the late 1990s. The general finding that IT has been important certainly seems plausible. The BLS (2002) also finds a large role for IT. Comparing 1995–2000 to 1990–95, the BLS found that IT capital deepening accounted for about 0.5 percentage points of a 1.2 percent speedup in private business labor productivity, and my calculations indicate that productivity gains in ITproducing industries accounted for an additional 0.2 points. We all know of many examples of how computers have improved the quality and variety of goods and services available as well as examples of how computers allow workers to accomplish tasks much more efficiently. Both the IT deepening and IT multifactor productivity results are consequences of quality adjustments made by U.S. statistical agencies that use market prices (rather than rental prices) to account for the increased speed and capacity of computers. These adjustments are embedded in the real IT investment data used by JHS. But the finding that TFP acceleration in all non-IT industries accounts for only .17 percentage points of the labor productivity acceleration is a bit puzzling. It seems to say that non-IT industries have done little more than purchase the ever-better IT equipment and plug it into their production processes. Surely these processes have undergone substantial redesign, and surely the costs, in terms of non-IT inputs, have been substantial. Why don’t JHS find more acceleration in non-IT TFP? Two possible explanations deserve emphasis. First, the non-IT redesign costs incurred by downstream industries may constitute intangible investments, and these go unmeasured. If intangible investments accelerated in the late 1990s, we should expect to see an increased contribution to labor productivity by non-IT TFP in the future. This underscores the importance of the goal of this volume—intangible investments are a source of growth that has eluded measurement. Second, statistical agency measurement methods capture more quality change in goods than in services, and there is potentially a bias. The IT quality adjustments, in which most U.S. economists express great confidence, are enormous (persistent double-digit annual rates), while quality adjustments to services are difficult to measure and often neglected. I remain concerned with an issue raised recently by Gullickson and me (Gullickson and Harper 2002). A tilt in the “measurement
playing field” could lead to a misallocation of industry contributions to aggregate productivity. Quality adjustment deserves continued emphasis and continued scrutiny, because our fundamental understanding of the sources of economic growth is at stake. References Berndt, Ernst R., and Jack E. Triplett, eds. 1990. Fifty years of economic measurement: The jubilee of the Conference on Research in Income and Wealth. Chicago: University of Chicago Press. Bureau of Labor Statistics (BLS). 1983. Trends in multifactor productivity trends, 1948–81. Bulletin no. 2178. Washington, DC: U.S. Government Printing Office. ———. 2002. Multifactor productivity trends, 1999. News Release no. 02-128. Washington, DC: U.S. Department of Labor, March 12. Christensen, Laurits R., Dale W. Jorgenson, and Lawrence J. Lau. 1973. Transcendental logarithmic production functions. Review of Economics and Statistics 55 (February): 28–45. Diewert, W. Erwin. 1976. Exact and superlative index numbers. Journal of Econometrics 4 (4): 115–45. Domar, Evsey D. 1961. On the measurement of technological change. Economic Journal 71 (284): 709–29. Gullickson, William, and Michael J. Harper. 1999. Possible measurement bias in aggregate productivity growth. Monthly Labor Review, February, 47–67. ———. 2002. Bias in aggregate productivity trends revisited. Monthly Labor Review, February, pp. 32–40. Jorgenson, Dale, Frank M. Gollop, and Barbara M. Fraumeni. 1987. Productivity and U.S. economic growth. Cambridge, MA: Harvard University Press. Jorgenson, Dale, and Zvi Griliches. 1967. The explanation of productivity change. Review of Economic Studies 34 (3): 249–82. Landefeld, J. Steven, and Robert P. Parker. 1995. Preview of the comprehensive revision of the national income and product accounts: BEA’s new featured measures of output and prices. Survey of Current Business, July, pp. 31–38. Solow, Robert M. 1957. Technical change and the aggregate production function. Review of Economics and Statistics 39 (3): 312–20.
12  Issues in the Measurement of Capital Services, Depreciation, Asset Price Changes, and Interest Rates
W. Erwin Diewert
12.1 Introduction

In this chapter, we discuss some of the problems involved in constructing price and quantity series for both capital stocks and the associated flows of services when there are general and asset-specific price changes in the economy.1 In section 12.2, we present the basic equations relating stocks and flows of capital assuming that data on the prices of vintages of a homogeneous capital good are available. This framework is not applicable under all circumstances, but it is a framework that will allow us to disentangle the effects of general price change, asset-specific price change, and depreciation. Section 12.3 continues the theoretical framework that was introduced in section 12.2. We show how information on vintage asset prices, vintage rental prices, and vintage depreciation rates is equivalent under certain assumptions; that is, knowledge of any one of these three sequences or profiles is sufficient to determine the other two. Section 12.4 discusses alternative sets of assumptions on nominal interest rates and anticipated asset price changes. We specify three different sets
W. Erwin Diewert is professor of economics at the University of British Columbia and a research associate of the National Bureau of Economic Research. The author is indebted to Carol Corrado, Kevin Fox, John Haltiwanger, Peter Hill, Ning Huang, Ulrich Kohli, Alice Nakamura, Paul Schreyer, Dan Sichel, and Frank Wykoff for helpful comments. This research was supported by a SSHRC research grant. None of the parties acknowledged here are responsible for any errors or opinions expressed in the paper. A longer version of the present paper, including the data used, is available as Department of Economics Discussion Paper 04-10, University of British Columbia, Vancouver, Canada.
1. We cover some of the same issues discussed in the recent paper by R. J. Hill and Hill (2003). However, Hill and Hill did not deal with the problems associated with adjusting nominal interest rates for general inflation.
of assumptions that we will use in our empirical illustration of the suggested methods. Section 12.5 discusses the significance of our assumptions made in the previous section and relates them to controversies in national income accounting. In particular, we discuss whether anticipated asset price decline should be an element of depreciation as understood by national income accountants. Section 12.6 discusses the problems involved in aggregating over vintages of capital, both in forming capital stocks and capital services. Instead of the usual perpetual inventory method for aggregating over vintages, which assumes perfectly substitutable vintages of the same stock, we suggest the use of a superlative index number formula to do the aggregation. Sections 12.7 to 12.10 show how the general algebra presented in sections 12.2 and 12.3 can be adapted to deal with four specific models of depreciation. The four models considered are the one-hoss-shay model, the straight-line depreciation model, the geometric model of depreciation, and the linear efficiency decline model. In section 12.11, we show how these models differ empirically by computing the corresponding stocks and flows using Canadian data on two asset classes. The details of the computations and the data used may be found in Diewert (2004). Section 12.12 shows how our framework can be modified to model the treatment of some forms of intangible capital, such as investments in research and development. Section 12.13 concludes with some observations on how statistical agencies might be able to use the material presented in this chapter.

12.2 The Fundamental Equations Relating Stocks and Flows of Capital

Before we begin with our algebra, it seems appropriate to explain why accounting for the contribution of capital to production is more difficult than accounting for the contributions of labor or materials. The main problem is that when a reproducible capital input is purchased for use by a production unit at the beginning of an accounting period, we cannot simply charge the entire purchase cost to the period of purchase. Since the benefits of using the capital asset extend over more than one period, the initial purchase cost must be distributed somehow over the useful life of the asset. This is the fundamental problem of accounting.
In a noninflationary environment, the value of an asset at the beginning of an accounting period is equal to the discounted stream of future rental payments that the asset is expected to yield. Thus, the stock value of the asset is equal to the discounted future service flows2 that the asset is expected
2. Walras (1954; first edition published in 1874) was one of the earliest economists to state that capital stocks are demanded because of the future flow of services that they render. Al-
to yield in future periods. Let the price of a new capital input purchased at the beginning of period t be P t0 . In a noninflationary environment, it can be assumed that the (potentially observable) sequence of (cross-sectional) vintage rental prices prevailing at the beginning of period t can be expected to prevail in future periods. Thus, in this no-general-inflation case, there is no need to have a separate notation for future expected rental prices for a new asset as it ages. However, in an inflationary environment, it is necessary to distinguish between the observable rental prices for the asset at different ages at the beginning of period t and future expected rental prices for assets of various ages.3 Thus let f 0t be the (observable) rental price of a new asset at the beginning of period t, let f 1t be the (observable) rental price of a one-period-old asset at the beginning of period t, let f 2t be the (observable) rental price of a two-period-old asset at the beginning of period t, and so on. Then the fundamental equation relating the stock value of a new asset at the beginning of period t, P t0 to the sequence of cross-sectional rental prices for assets of age n prevailing at the beginning of period t, { f nt ; n 0, 1, 2, . . .}4 is (1)
P_0^t = f_0^t + \frac{1+i_1^t}{1+r_1^t}\, f_1^t + \frac{(1+i_1^t)(1+i_2^t)}{(1+r_1^t)(1+r_2^t)}\, f_2^t + \cdots
In the above equation, 1 i t1 is the rental price escalation factor that is expected to apply to a one-period-old asset going from the beginning of period t to the end of period t (or, equivalently, to the beginning of period t 1), (1 i t1)(1 i t2 ) is the rental price escalation factor that is expected to apply to a two-period-old asset going from the beginning of period t to the beginning of period t 2, etc. Thus, the i tn are expected rates of price change for used assets of varying ages n that are formed at the beginning of period t. The term 1 r t1 is the discount factor that makes a dollar received at the beginning of period t equivalent to a dollar received at the beginning of period t 1, the term (1 r t1 )(1 r t2 ) is the discount factor that makes a dollar received at the beginning of period t equivalent to a dollar received at the beginning of period t 2, and so on. Thus, the r tn are one-period nominal interest rates that represent the term structure of interest rates at the beginning of period t.5 though he was perhaps the first economist to formally derive a user cost formula, as we shall see, he did not work out the explicit discounting formula that Böhm-Bawerk (1891, p. 342) was able to derive. 3. Note that these future expected rental prices are not generally observable due to the lack of futures markets for these future period rentals of the assets of varying ages. 4. The sequence of (cross-sectional) vintage rental prices ( f tn ) is called the age-efficiency profile of the asset. 5. Peter Hill has noted a major problem with the use of equation (1) as the starting point of our discussion: namely, unique assets will by definition not have used versions of the same asset in the marketplace during the current period, and so the cross-sectional rental prices f tn for assets of age n in period t will not exist for these assets! In this case, the f tn should be
We now generalize equation (1) to relate the stock value of an n-periodold asset at the beginning of period t, P nt , to the sequence of cross-sectional vintage rental prices prevailing at the beginning of period t, { f nt }; thus, for n 0, 1, 2, . . . , we assume (2)
P_n^t = f_n^t + \frac{1+i_1^t}{1+r_1^t}\, f_{n+1}^t + \frac{(1+i_1^t)(1+i_2^t)}{(1+r_1^t)(1+r_2^t)}\, f_{n+2}^t + \cdots
Thus, older assets discount fewer terms in the above sum; thus, as n increases by one, we have one less term on the right-hand side of equation (2). However, note that we are applying the same price escalation factors (1 i t1), (1 i t1)(1 i t2), . . . , to escalate the cross-sectional rental prices prevailing at the beginning of period t, f 1t, f 2t, . . . , and to form estimates of future expected rental prices for each vintage of the capital stock that is in use at the beginning of period t. The rental prices prevailing at the beginning of period t for assets of various ages, f t0, f t1, . . . are potentially observable.6 These cross-section rental prices reflect the relative efficiency of the various vintages of the capital good that are still in use at the beginning of period t. It is assumed that these rentals are paid (explicitly or implicitly) by the users at the beginning of period t. Note that the sequence of asset stock prices for various ages at the beginning of period t, P t0, P t1, . . . is not affected by general inflation provided that the general inflation affects the expected asset rates of price change i tn and the nominal interest rates r tn in a proportional manner. We will return to this point later. The physical productivity characteristics of a unit of capital of each age are determined by the sequence of cross-sectional rental prices. Thus, a brand-new asset is characterized by the vector of current rental prices for assets of various ages, f t0, f t1, f t2, . . . , which are interpreted as “physical” contributions to output that the new asset is expected to yield during the current period t (this is f t0 ), the next period (this is f t1), and so on. An asset that is one period old at the start of period t is characterized by the vector f t1, f t2, . . . , and so on.7 We have not explained how the expected rental price rates of price interpreted as expected future rentals that the unique asset is expected to generate at today’s prices. The (1 i tn ) terms then summarize expectations about the amount of asset-specific price change that is expected to take place. This reinterpretation of equation (1) is more fundamental, but we chose not to make it our starting point because it does not lead to a completely objective method for national statisticians to form reproducible estimates of these future rental payments. However, in many situations (e.g., the valuation of a new movie), the statistician will be forced to attempt to implement P. Hill’s (2000) more general model. In section 12.12 below, we apply a variant of the expected rentals interpretation of our equations to value intangible capital. 6. This is the main reason that we use this escalation of cross-sectional rental prices approach to capital measurement rather than the more fundamental discounted future expected rentals approach advocated by Hill. 7. Triplett (1996, p. 97) used this characterization for capital assets of various vintages.
change i_n^t are to be estimated. We shall deal with this problem in section 12.4. However, it should be noted that there is no guarantee that our expectations about the future course of rental prices are correct. At this point, we make some simplifying assumptions about the expected rates of rental price change for future periods, i_n^t, and the interest rates r_n^t. We assume that these anticipated asset-specific price change escalation factors at the beginning of each period t are all equal; that is, we assume

(3)   i_n^t = i^t ;   n = 1, 2, \ldots

We also assume that the term structure of (nominal) interest rates at the beginning of each period t is constant; that is, we assume

(4)   r_n^t = r^t ;   n = 1, 2, \ldots
However, note that as the period t changes, r t and i t can change. Using assumptions (3) and (4), we can rewrite equation (2), which relates the sequence or profile of stock prices of age n at the beginning of period t {P tn} to the sequence or profile of (cross-sectional) rental prices for assets of age n at the beginning of period t { f tn }, as follows: (5)
P_0^t = f_0^t + \left(\frac{1+i^t}{1+r^t}\right) f_1^t + \left(\frac{1+i^t}{1+r^t}\right)^2 f_2^t + \left(\frac{1+i^t}{1+r^t}\right)^3 f_3^t + \cdots
P_1^t = f_1^t + \left(\frac{1+i^t}{1+r^t}\right) f_2^t + \left(\frac{1+i^t}{1+r^t}\right)^2 f_3^t + \left(\frac{1+i^t}{1+r^t}\right)^3 f_4^t + \cdots
\vdots
P_n^t = f_n^t + \left(\frac{1+i^t}{1+r^t}\right) f_{n+1}^t + \left(\frac{1+i^t}{1+r^t}\right)^2 f_{n+2}^t + \left(\frac{1+i^t}{1+r^t}\right)^3 f_{n+3}^t + \cdots
On the left-hand side of equations (5), we have the sequence of period-t asset prices by age starting with the price of a new asset, P t0, moving to the price of an asset that is one period old at the start of period t, P t1, then moving to the price of an asset that is two periods old at the start of period t, P t2, and so on. On the right-hand side of equations (5), the first term in each equation is a member of the sequence of rental prices by age of asset that prevails in the market (if such markets exist) at the beginning of period t. Thus, f t0 is the rent for a new asset, f t1 is the rent for an asset that is one period old at the beginning of period t, f t2 is the rent for an asset that is two periods old, and so on. This sequence of current market rental prices for the assets of various vintages is then extrapolated out into the future using the anticipated price escalation rates (1 i t ), (1 i t )2, (1 i t)3, and so on, and then these future expected rentals are discounted back to the beginning of period t using the nominal discount factors (1 r t ), (1 r t )2,
(1 r t )3, and so on. Note that given the period-t expected asset inflation rate i t and the period-t nominal discount rate r t, we can go from the (crosssectional) sequence of vintage rental prices { f tn} to the (cross-sectional) sequence of vintage asset prices {P tn} using equations (5). We shall show below how this procedure can be reversed; that is, we shall show how, given the sequence of cross-sectional asset prices, we can construct estimates for the sequence of cross-sectional rental prices. Böhm-Bawerk (1891, p. 342) considered a special case of equations (5) where all service flows fn were equal to 100 for n 0, 1, . . . , 6 and equal to 0 thereafter, where the asset inflation rate was expected to be 0 and where the interest rate r was equal to .05 or 5 percent.8 This is a special case of what has come to be known as the one-hoss-shay model, and we shall consider it in more detail in section 12.7. Note that equations (5) can be rewritten as follows:9 (6)
P_0^t = f_0^t + \left(\frac{1+i^t}{1+r^t}\right) P_1^t ;
P_1^t = f_1^t + \left(\frac{1+i^t}{1+r^t}\right) P_2^t ;
P_2^t = f_2^t + \left(\frac{1+i^t}{1+r^t}\right) P_3^t ;
\vdots
P_n^t = f_n^t + \left(\frac{1+i^t}{1+r^t}\right) P_{n+1}^t ; \ldots
The first equation in equations (6) says that the value of a new asset at the start of period t, P_0^t, is equal to the rental that the asset can earn in period t, f_0^t,10 plus the expected asset value of the capital good at the end of period t, (1+i^t)P_1^t, but this expected asset value must be divided by the discount factor (1+r^t) in order to convert this future value into an equivalent beginning-of-period-t value.11 8. Böhm-Bawerk (1891, p. 343) went on to construct the sequence of vintage asset prices using his special case of equations (5). 9. Christensen and Jorgenson (1969, p. 302) do this for the geometric depreciation model, except that they assume that the rental is paid at the end of the period rather than the beginning. Variants of the system of equations (6) were derived by Christensen and Jorgenson (1973), Jorgenson (1989, p. 10), Hulten (1990, p. 128), and Diewert and Lawrence (2000, p. 276). Irving Fisher (1908, pp. 32–33) also derived these equations in words. 10. Note that we are implicitly assuming that the rental is paid to the owner at the beginning of period t. 11. Another way of interpreting, say, the first equation in equations (6) runs as follows: the purchase cost of a new asset P_0^t less the rental f_0^t (which is paid immediately at the beginning of period t) can be regarded as an investment, which must earn the going rate of return r^t. Thus, we must have (P_0^t - f_0^t)(1 + r^t) = (1 + i^t)P_1^t, which is the (expected) value of the asset at the end of period t. This line of reasoning can be traced back to Walras (1954, p. 267).
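To make the recursion in equations (6) concrete, here is a minimal Python sketch (not part of the original chapter) that builds a profile of vintage asset prices from an assumed age-efficiency profile of rentals, a nominal interest rate, and an expected asset inflation rate. The seven-period profile of equal rentals, the 5 percent interest rate, and the zero expected asset inflation rate are illustrative values in the spirit of the Böhm-Bawerk example, not data from the chapter.

```python
def asset_prices_from_rentals(f, r, i):
    """Vintage asset prices P_0, ..., P_{L-1} from beginning-of-period rentals
    f_0, ..., f_{L-1}, using the recursion P_n = f_n + ((1+i)/(1+r)) * P_{n+1}
    of equations (6), with P_L = 0 once the asset has no remaining rental value."""
    L = len(f)
    P = [0.0] * (L + 1)                      # P[L] = 0: no rentals left after age L
    for n in range(L - 1, -1, -1):
        P[n] = f[n] + ((1.0 + i) / (1.0 + r)) * P[n + 1]
    return P[:L]

# Illustrative one-hoss-shay profile in the spirit of the Boehm-Bawerk example:
# equal rentals of 100 for seven periods, no expected asset price change, r = 5 percent.
rentals = [100.0] * 7
prices = asset_prices_from_rentals(rentals, r=0.05, i=0.0)
print([round(p, 2) for p in prices])         # prices[0] is the price of a new asset
```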
Now it is straightforward to solve equations (6) for the sequence of period-t cross-sectional rental prices, { f tn}, in terms of the cross-sectional asset prices, {P tn}:
(7)   f_0^t = P_0^t - \left(\frac{1+i^t}{1+r^t}\right) P_1^t = (1+r^t)^{-1}[P_0^t(1+r^t) - (1+i^t)P_1^t]
      f_1^t = P_1^t - \left(\frac{1+i^t}{1+r^t}\right) P_2^t = (1+r^t)^{-1}[P_1^t(1+r^t) - (1+i^t)P_2^t]
      f_2^t = P_2^t - \left(\frac{1+i^t}{1+r^t}\right) P_3^t = (1+r^t)^{-1}[P_2^t(1+r^t) - (1+i^t)P_3^t]
      \vdots
      f_n^t = P_n^t - \left(\frac{1+i^t}{1+r^t}\right) P_{n+1}^t = (1+r^t)^{-1}[P_n^t(1+r^t) - (1+i^t)P_{n+1}^t] ; \ldots
Thus, equations (5) allow us to go from the sequence of rental prices by age n{ f tn} to the sequence of asset prices by age n{P tn}, while equations (7) allow us to reverse the process. Equations (7) can be derived from elementary economic considerations. Consider the first equation in equations (7). Think of a production unit as purchasing a unit of the new capital asset at the beginning of period t at a cost of P 0t and then using the asset throughout period t. However, at the end of period t, the producer will have a depreciated asset that is expected to be worth (1 i t ) P 1t . Since this offset to the initial cost of the asset will only be received at the end of period t, it must be divided by (1 r t ) to express the benefit in terms of beginning-of-period-t dollars. Thus the expected net cost12 of using the new asset for period t is P 0t – [(1 i t )/(1 r t )] P 1t . The above equations assume that the actual or implicit period t rental payments f tn for assets of different ages n are made at the beginning of period t. It is sometimes convenient to assume that the rental payments are made at the end of each accounting period. Thus, we define the endof-period-t rental price or user cost for an asset that is n periods old at the beginning of period t, u tn , in terms of the corresponding beginning-ofperiod-t rental price f tn as follows: (8)
u_n^t = (1 + r^t) f_n^t ;   n = 0, 1, 2, \ldots
Thus, if the rental payment is made at the end of the period instead of the beginning, then the beginning of the period rental f tn must be escalated by the interest rate factor (1 r t ) in order to obtain the end-of-period user cost u tn. 12. This explains why the rental prices ftn are sometimes called user costs. This derivation of a user cost was used by Diewert (1974, p. 504; 1980, pp. 472–73; 1992b, p. 194) and by Hulten (1996, p. 155).
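Going in the opposite direction, the following Python sketch (again an illustration under assumed numbers, not the chapter's data) recovers beginning- and end-of-period user costs from a profile of secondhand asset prices using equations (7) and (8); the geometric-style price profile and the 6 percent and 2 percent rates are hypothetical.

```python
def user_costs_from_asset_prices(P, r, i):
    """Beginning-of-period user costs f_n (equations (7)) and end-of-period user
    costs u_n (equation (8)) from a profile of vintage asset prices P_0, ..., P_{L-1}.
    An asset of age L is assumed to be worthless (a truncation assumption)."""
    P = list(P) + [0.0]
    f = [P[n] - ((1.0 + i) / (1.0 + r)) * P[n + 1] for n in range(len(P) - 1)]
    u = [(1.0 + r) * fn for fn in f]         # equation (8)
    return f, u

# Hypothetical price profile declining 20 percent per year of age, r = 6%, i = 2%
P = [1000.0 * 0.8 ** n for n in range(5)]
f, u = user_costs_from_asset_prices(P, r=0.06, i=0.02)
print([round(x, 1) for x in f])
print([round(x, 1) for x in u])
```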
Using equations (8) and the second set of equations in (7), it can readily be shown that the sequence of end-of-period-t user costs {u nt } can be defined in terms of the period-t sequence of asset prices by age {P nt } as follows: (9)
u_0^t = P_0^t(1+r^t) - (1+i^t)P_1^t
u_1^t = P_1^t(1+r^t) - (1+i^t)P_2^t
u_2^t = P_2^t(1+r^t) - (1+i^t)P_3^t
\vdots
u_n^t = P_n^t(1+r^t) - (1+i^t)P_{n+1}^t ; \ldots
Equations (9) can also be given a direct economic interpretation. Consider the following explanation for the user cost for a new asset, u t0. At the end of period t, the business unit expects to have an asset worth (1 i t )P t1. Offsetting this benefit is the beginning-of-period asset purchase cost, P t0. However, in addition to this cost, the business must charge itself either the explicit interest cost that occurs if money is borrowed to purchase the asset or the implicit opportunity cost of the equity capital that is tied up in the purchase. Thus, offsetting the end-of-period benefit (1 i t )P t1 is the initial purchase cost and opportunity interest cost of the asset purchase, P t0(1 r t ), leading to an end-of-period-t net cost of P t0(1 r t ) – (1 i t )P t1 or u t0. It is interesting to note that in both the accounting and financial management literature of the past century, there was a reluctance to treat the opportunity cost of equity capital tied up in capital inputs as a genuine cost of production.13 However, more recently there is an acceptance of an imputed interest charge for equity capital as a genuine cost of production.14 In the following section, we will relate the asset price profiles {P tn} and the user cost profiles {u tn} to depreciation profiles. However, before turning to the subject of depreciation, it is important to stress that the analysis presented in this section is based on a number of restrictive assumptions, particularly on future price expectations. Moreover, we have not explained how these asset price expectations are formed, and we have not explained how the period-t nominal interest rate is to be estimated (we will address these topics in section 12.7). We have not explained what should be done if the sequence of secondhand asset prices {P tn} is not available and the sequences of vintage rental prices or user costs, { f tn} or {u tn}, are also not available (we will address this problem in later sections as well). We have also assumed that asset values and user costs are independent of how intensively the assets are used. Finally, we have not modeled uncertainty (about future prices and the useful lives of assets) and attitudes toward risk 13. This literature is reviewed in Diewert and Fox (1999, pp. 271–74). 14. Stern Stewart and Co. has popularized the idea of charging for the opportunity cost of equity capital and has called the resulting income concept economic value added (EVA).
on the part of producers. Thus, the analysis presented in this chapter is only a start on the difficult problems associated with measuring capital input. 12.3 Cross-Section Depreciation Profiles Recall that in the previous section, P tn was defined to be the price of an asset that was n periods old at the beginning of period t. Generally, the decline in asset value as we go from one vintage to the next oldest is called depreciation. More precisely, we define the cross-section depreciation D tn15 of an asset that is n periods old at the beginning of period t as (10)
D_n^t \equiv P_n^t - P_{n+1}^t ;   n = 0, 1, 2, \ldots
Thus, D tn is the value of an asset that is n periods old at the beginning of period t, P tn, minus the value of an asset that is n 1 periods old at the beginning of period t, P tn1. Obviously, given the sequence of period-t cross-section asset prices {P tn}, we can use equations (10) to determine the period-t sequence of declines in asset values by age, {D tn}. Conversely, given the period-t crosssection depreciation sequence or profile, {D nt }, we can determine the period-t asset prices by age n by adding up amounts of depreciation: (11)
P_0^t = D_0^t + D_1^t + D_2^t + \cdots
P_1^t = D_1^t + D_2^t + D_3^t + \cdots
\vdots
P_n^t = D_n^t + D_{n+1}^t + D_{n+2}^t + \cdots
Rather than working with first differences of asset prices by age, it is more convenient to reparameterize the pattern of cross-section depreciation by defining the period-t depreciation rate tn for an asset that is n periods old at the start of period t as follows: (12)
\delta_n^t \equiv \frac{D_n^t}{P_n^t} = 1 - \frac{P_{n+1}^t}{P_n^t} ;   n = 0, 1, 2, \ldots
In the above definitions, we require n to be such that P tn is positive.16 Obviously, given the sequence of period-t asset prices by age n, {P tn}, we 15. This terminology is due to P. Hill (1999), who distinguished the decline in secondhand asset values due to aging (cross-section depreciation) from the decline in an asset value over a period of time (time series depreciation). Triplett (1996, pp. 98–99) uses the cross-section definition of depreciation (calling it deterioration) and shows that it is equal to the concept of capital consumption in the national accounts, but he does this under the assumption of no expected real asset price change. We will examine the relationship of cross-section to time series depreciation in section 12.5 below. 16. This definition of depreciation dates back to Hicks (1939, p. 176) at least and was used extensively by Edwards and Bell (1961, p. 175), Hulten and Wykoff (1981a,b), who call it deterioration, Diewert (1974, p. 504), and Hulten (1990, p. 128; 1996, p. 155).
can use equations (12) to determine the period-t sequence of cross-section depreciation rates, {tn}. Conversely, given the cross-section sequence of period-t depreciation rates, {tn}, as well as the price of a new asset in period t, P t0 , we can determine the period-t asset prices by age as follows: (13)
P_1^t = (1 - \delta_0^t)P_0^t
P_2^t = (1 - \delta_0^t)(1 - \delta_1^t)P_0^t
\vdots
P_n^t = (1 - \delta_0^t)(1 - \delta_1^t)\cdots(1 - \delta_{n-1}^t)P_0^t , \ldots
The interpretation of equations (13) is straightforward. At the beginning of period t, a new capital good is worth P t0. An asset of the same type but that is one period older at the beginning of period t is less valuable by the amount of depreciation t0P t0 and hence is worth (1 – t0 )P t0, which is equal to P t1. An asset that is two periods old at the beginning of period t is less valuable than a one-period-old asset by the amount of depreciation t1P t1 and hence is worth P t2 (1 – t1 )P t1, which is equal to (1 – t1 )(1 – t0 )P 0t using the first equation in equations (13), and so on. Suppose L – 1 is the first integer that is such that tL–1 is equal to one. Then P tn equals zero for all n L; that is, at the end of L periods of use, the asset no longer has a positive rental value. If L 1, then a new asset of this type delivers all of its services in the first period of use and the asset is in fact a nondurable asset. Now substitute equations (12) into equations (9) in order to obtain the following formulas for the sequence of the end-of-period-t user costs by age n, {u tn}, in terms of the price of a new asset at the beginning of period t, P t0, and the sequence of cross-section depreciation rates, {tn}: (14)
u_0^t = [(1+r^t) - (1+i^t)(1-\delta_0^t)]\,P_0^t
u_1^t = (1-\delta_0^t)[(1+r^t) - (1+i^t)(1-\delta_1^t)]\,P_0^t
\vdots
u_n^t = (1-\delta_0^t)\cdots(1-\delta_{n-1}^t)[(1+r^t) - (1+i^t)(1-\delta_n^t)]\,P_0^t ; \ldots
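As a small illustration of equations (14), the sketch below (with assumed values only) generates end-of-period user costs by age from a new-asset price, a nominal interest rate, an expected asset inflation rate, and a profile of cross-section depreciation rates; the straight-line style profile used here is purely hypothetical.

```python
def user_costs_from_depreciation(P0, r, i, deltas):
    """End-of-period user costs by age implied by equations (14):
    u_n = (1-d_0)...(1-d_{n-1}) * [(1+r) - (1+i)*(1-d_n)] * P0."""
    u, survival = [], 1.0
    for d in deltas:
        u.append(survival * ((1.0 + r) - (1.0 + i) * (1.0 - d)) * P0)
        survival *= (1.0 - d)                # accumulate (1-d_0)...(1-d_n)
    return u

# Hypothetical straight-line style profile over a four-period life: d_n = 1/(4-n)
deltas = [0.25, 1.0 / 3.0, 0.5, 1.0]
print([round(x, 1) for x in user_costs_from_depreciation(1000.0, r=0.06, i=0.02, deltas=deltas)])
```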
Thus, given P t0 (the beginning-of-period-t price of a new asset), i t (the nominal rate of new asset price change that is expected at the beginning of period t), r t (the one-period nominal interest rate that the business unit faces at the beginning of period t) and given the sequence of cross-section vintage depreciation rates prevailing at the beginning of period t (the tn ), then we can use equations (14) to calculate the sequence of the end-of-period user costs for period t, the u tn . Of course, given the u tn , we can use equations (8) to calculate the beginning-of-period user costs (the f tn ) and then use the f tn to calculate the sequence of asset prices by age P tn using equations (5), and finally, given the P tn, we can use equations (12) in order to calculate the sequence of depreciation rates for assets of age n at the beginning of period
t, the \delta_n^t. Thus, given any one of these sequences or profiles, all of the other sequences are completely determined. This means that assumptions about depreciation rates, the pattern of user costs by age of asset, or the pattern of asset prices by age of asset cannot be made independently of each other.17 It is useful to look more closely at the first equation in equations (14), which expresses the user cost or rental price of a new asset at the end of period t, u_0^t, in terms of the depreciation rate \delta_0^t, the one-period nominal interest rate r^t, the new asset inflation rate i^t that is expected to prevail at the beginning of period t, and the beginning-of-period-t price for a new asset, P_0^t:

(15)   u_0^t = [(1+r^t) - (1+i^t)(1-\delta_0^t)]\,P_0^t = [r^t - i^t + (1+i^t)\delta_0^t]\,P_0^t .

Thus, the user cost of a new asset u_0^t that is purchased at the beginning of period t (and the actual or imputed rental payment is made at the end of the period) is equal to r^t - i^t (a nominal interest rate minus an asset inflation rate that can be loosely interpreted18 as a real interest rate) times the initial asset cost P_0^t, plus (1+i^t)\delta_0^t P_0^t, which is depreciation on the asset at beginning-of-period prices, \delta_0^t P_0^t, times one plus the expected rate of asset price change, (1+i^t).19 If we further assume that the expected rate of asset price change i^t is 0, then equation (15) further simplifies to: (16)
u_0^t = [r^t + \delta_0^t]\,P_0^t .
Under these assumptions, the user cost of a new asset is equal to the interest rate plus the depreciation rate times the initial purchase price.20 This is essentially the user cost formula that was obtained by Walras (1954, pp. 268–69) in 1874. However, the basic idea that a durable input should be charged a period 17. This point was first made explicitly by Jorgenson and Griliches (1967, p. 257); see also Jorgenson and Griliches (1972, pp. 81–87). Much of the above algebra for switching from one method of representing vintage capital inputs to another was first developed by Christensen and Jorgenson (1969, pp. 302–5; 1973) for the geometrically declining depreciation model. The general framework for an internally consistent treatment of capital services and capital stocks in a set of vintage accounts was set out by Jorgenson (1989) and Hulten (1990, pp. 127– 29; 1996, pp. 152–60). 18. We will provide a more precise definition of a real interest rate later. 19. This formula was obtained by Christensen and Jorgenson (1969, p. 302) for the geometric model of depreciation, but it is valid for any depreciation model. Griliches (1963, p. 120) also came very close to deriving this formula in words: “In a perfectly competitive world the annual rent of a machine would equal the marginal product of its services. The rent itself would be determined by the interest costs on the investment, the deterioration in the future productivity of the machine due to current use, and the expected change in the price of the machine (obsolescence).” 20. Using equations (13) and (14) and the assumption that the asset inflation rate i t 0, it can be shown that the user cost of an asset that is n periods old at the start of period t can be written as u tn (r t tn )P tn, where P tn is the beginning of period-t secondhand market price for the asset.
price that is equal to a depreciation term plus a term that would cover the cost of financial capital goes back to Babbage (1835, p. 287) and others.21 Babbage did not proceed further with the user cost idea. Walras seems to have been the first economist who formalized the idea of a user cost into a mathematical formula. However, the early industrial engineering literature also independently came up with the user cost idea; Church (1901, pp. 734 and 907–8) in particular gave a very modern exposition of the ingredients needed to construct user costs or machine rents. Church was well aware of the importance of determining the “right” rate to be charged for the use of a machine in a multiproduct enterprise. This information is required not only to price products appropriately but to determine whether an enterprise should make or purchase a particular commodity. Babbage (1835, p. 203) and Canning (1929, pp. 259–60) were also aware of the importance of determining the right machine rate charge.22 The above equations relating asset prices by age n, P tn , beginning-ofperiod user costs by age n, f tn , end-of-period user costs, u tn , and the (crosssection) depreciation rates tn are the fundamental ones that we will specialize in subsequent sections in order to measure both wealth capital stocks and capital services under conditions of inflation. In the following section, we shall consider several options that could be used in order to determine empirically the interest rates r t and the expected asset rates of price change i t. 12.4 The Empirical Determination of Interest Rates and Rates of Asset Price Change We consider initially three broad approaches23 to the determination of the nominal interest rate r t that is to be used to discount future period value flows by the business units in the aggregate under consideration: 21. Solomons (1968, pp. 9–17) indicates that interest was regarded as a cost for a durable input in much of the nineteenth-century accounting literature. The influential book by Garcke and Fells (1893) changed this. 22. Under moderate inflation, the difficulties with traditional cost accounting based on historical cost, and no proper allowance for the opportunity of capital, the proper pricing of products becomes very difficult. Diewert and Fox (1999, pp. 271–74) argued that this factor contributed to the great productivity slowdown that started around 1973 and persisted to the early 1990s. The traditional method of cost accounting can be traced back to a book first published in 1887 by the English accountants Garcke and Fells (1893, pp. 70–71). Their rather crude approach to cost accounting should be compared to the masterful analysis of Church. Garcke and Fells (pp. 72–73) endorsed the idea that depreciation was an admissible item of cost that should be allocated in proportion to the prime cost (i.e., labor and materials cost) of manufacturing an article but they explicitly ruled out interest as a cost. The aversion of accountants to including interest as a cost can be traced back to the influence of Garcke and Fells. 23. Other methods for determining the appropriate interest rates that should be inserted into user cost formulas are discussed by Harper, Berndt, and Wood (1989) and in chapter 5 of Schreyer (2001). Harper, Berndt, and Wood evaluate empirically five alternative rental price formulas using geometric depreciation but making different assumptions about the interest
• Use the ex post rate of return that will just make the sum of the user costs exhaust the gross operating surplus of the production sectors for the aggregate under consideration.
• Use an aggregate of nominal interest rates that the production sectors in the aggregate might be facing at the beginning of each period.
• Take a fixed real interest rate and add to it actual ex post consumer price inflation or anticipated consumer price inflation.

The first approach was used for the entire private production sector of the economy by Jorgenson and Griliches (1967, p. 267) and for various sectors of the economy by Christensen and Jorgenson (1969, p. 307). It is also widely used by statistical agencies. It has the advantage that the value of output for the sector will exactly equal the value of input in a consistent accounting framework. It has the disadvantages that it is subject to measurement error and that it is an ex post rate of return, which may not reflect the economic conditions facing producers at the beginning of the period. This approach (incorrectly in our view) transforms pure profits (or losses) into a change in the opportunity cost of financial capital. The second approach suffers from aggregation problems. There are many interest rates in an economy at the beginning of an accounting period, and the problem of finding the right aggregate of these rates is not a trivial one. The third approach works as follows. Let the consumer price index for the economy at the beginning of period t be c^t, say. Then the ex post general consumer inflation rate for period t is \rho^t, defined as (17)
1 + \rho^t \equiv \frac{c^{t+1}}{c^t} .
Let the production units under consideration face the real interest rate r ∗t. Then, by the Fisher (1896) effect, the relevant nominal interest rate that the producers face should be approximately equal to r t, defined as follows: (18)
r^t \equiv (1 + r^{*t})(1 + \rho^t) - 1 .
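For example, a minimal Python sketch of equation (18), assuming the 4 percent real rate discussed below and a hypothetical 2.5 percent consumer inflation rate:

```python
r_star = 0.04                                # assumed real interest rate
rho = 0.025                                  # hypothetical consumer price inflation for period t
r = (1.0 + r_star) * (1.0 + rho) - 1.0       # equation (18)
print(round(r, 4))                           # 0.066, i.e. a 6.6 percent nominal rate
```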
The Australian Bureau of Statistics assumes that producers face a real interest rate of 4 percent. This is consistent with long-run observed economywide real rates of return for most Organization for Economic Cooperation and Development (OECD) countries, which fall in the 3 to 5 percent range. We shall choose this third method for defining nominal interest rates and choose the real rate of return to be 4 percent per annum; that is, we assume that the nominal rate r t is defined by equation (18) with the real rate defined by
rate and the treatment of asset price change. They show (as we will later) that the choice of formula matters.
(19)   r^{*t} = 0.04 ,
assuming that the accounting period chosen is a year.24 We turn now to the determination of the asset expected rates of price change,25 the i t, which appear in most of the formulas derived in the preceding sections of this chapter. There are three broad approaches that can be used in this context:
• Use actual ex post rates of price change for a new asset over each period.
• Assume that each asset rate of price change is equal to the general inflation rate for each period.
• Estimate anticipated rates of asset price change for each period.

In what follows, we will compute cross-sectional user costs using Canadian data on investments for two broad classes of assets (nonresidential construction and machinery and equipment) for four different sets of assumptions about depreciation or the relative efficiency of assets by age. We will undertake these computations in an inflationary environment and make each of the three sets of assumptions about the asset inflation rates listed above for each of the four depreciation models, giving twelve models in all that will be compared. If the various models give very different results, this indicates that the statistical agency computing capital stocks and service flows under inflation must choose its preferred model with some care. When we assume that the rate of price change for each asset is equal to the general inflation rate \rho^t defined by equation (17), the equations presented earlier simplify. Thus, if we replace 1 + i^t with 1 + \rho^t and 1 + r^t with (1 + r^*)(1 + \rho^t), equations (5), which relate the period-t asset prices by age n, P_n^t, to the rental prices f_n^t, become
(20)   P_0^t = f_0^t + \left(\frac{1}{1+r^*}\right) f_1^t + \left(\frac{1}{1+r^*}\right)^2 f_2^t + \left(\frac{1}{1+r^*}\right)^3 f_3^t + \cdots
       P_1^t = f_1^t + \left(\frac{1}{1+r^*}\right) f_2^t + \left(\frac{1}{1+r^*}\right)^2 f_3^t + \left(\frac{1}{1+r^*}\right)^3 f_4^t + \cdots
       \vdots
       P_n^t = f_n^t + \left(\frac{1}{1+r^*}\right) f_{n+1}^t + \left(\frac{1}{1+r^*}\right)^2 f_{n+2}^t + \left(\frac{1}{1+r^*}\right)^3 f_{n+3}^t + \cdots
Note that only the constant real interest rate r^* appears in these equations. If we replace 1 + i^t with 1 + \rho^t and 1 + r^t with (1 + r^*)(1 + \rho^t), equations
(14), which relate the end-of-period user costs u tn to the depreciation rates tn , become (21)
u_0^t = (1+\rho^t)[(1+r^*) - (1-\delta_0^t)]P_0^t = (1+\rho^t)[r^* + \delta_0^t]P_0^t
u_1^t = (1+\rho^t)(1-\delta_0^t)[(1+r^*) - (1-\delta_1^t)]P_0^t = (1+\rho^t)(1-\delta_0^t)[r^* + \delta_1^t]P_0^t
\vdots
u_n^t = (1+\rho^t)(1-\delta_0^t)\cdots(1-\delta_{n-1}^t)[(1+r^*) - (1-\delta_n^t)]P_0^t = (1+\rho^t)(1-\delta_0^t)\cdots(1-\delta_{n-1}^t)[r^* + \delta_n^t]P_0^t .

Now use equations (8) and 1 + r^t = (1 + r^*)(1 + \rho^t) and substitute into equations (21) to obtain the following equations, which relate the beginning-of-period user costs f_n^t to the depreciation rates \delta_n^t:

(22)   f_0^t = (1+r^*)^{-1}[r^* + \delta_0^t]P_0^t
       f_1^t = (1+r^*)^{-1}(1-\delta_0^t)[r^* + \delta_1^t]P_0^t
       \vdots
       f_n^t = (1+r^*)^{-1}(1-\delta_0^t)\cdots(1-\delta_{n-1}^t)[r^* + \delta_n^t]P_0^t .
Note that only the constant real interest rate r∗ appears in equations (22), but equations (21) also have the general inflation rate (1 t ) as a multiplicative factor. As mentioned above, in our third class of assumptions about rates of asset price change, we want to estimate anticipated rates of asset price change and use these estimates as our i t in the various formulas we have exhibited. Unfortunately, there are any number of forecasting methods that could be used to estimate the anticipated asset rates of price change. We will take a somewhat different approach than a pure forecasting one: we will simply smooth the observed ex post new asset rates of price change and use these smoothed rates as our estimates of anticipated rates.26 A similar forecasting problem arises when we use ex post actual consumer price index inflation rates (recall equations [17] and [18] above) in order to generate anticipated general inflation rates. Thus, in our third set of models, we will use both smoothed asset inflation rates and smoothed general inflation rates as our estimates for anticipated rates. In our first class of models, we will use actual ex post rates in both cases. Before we proceed to consider our four specific depreciation models, we 26. Unfortunately, different analysts may choose different smoothing methods, so there may be a problem of a lack of reproducibility in our estimating procedures. Harper, Berndt, and Wood (1989, p. 351) note that the use of time series techniques to smooth ex post asset inflation rates and the use of such estimates as anticipated price change dates back to Epstein (1977).
briefly consider in the next section a topic of some current interest: namely, the interaction of (foreseen) obsolescence and depreciation. We also discuss cross-section versus time series depreciation. 12.5 Obsolescence and Depreciation We begin this section with a definition of the time series depreciation of an asset. Define the ex post time series depreciation of an asset that is n periods old at the beginning of period t, E nt , to be its secondhand market price at the beginning of period t, P tn , less the price of an asset that is one period t1 older at the beginning of period t 1, P n1 ; that is, (23)
E_n^t \equiv P_n^t - P_{n+1}^{t+1} ;   n = 0, 1, 2, \ldots
Definition (23) should be contrasted with our earlier definitions in equation (10), which defined the cross-section amounts of depreciation for the same assets at the beginning of period t, D tn P tn – P tn1. We can now explain why we preferred to work with the cross-section definition of depreciation, equation (10), over the time series definition, equation (23). The problem with equation (23) is that time series depreciation captures the effects of changes in two things: changes in time (this is the change in t to t 1)27 and changes in the age of the asset (this is the change in n to n 1).28 Thus, time series depreciation aggregates together two effects: the asset-specific price change that occurred between time t and time t 1 (asset revaluation due to general inflation and asset-specific price change) and the effects of asset aging (depreciation). Thus, the time series definition of depreciation combines together two distinct effects. The foregoing definition of ex post time series depreciation is the original definition of depreciation, and it extends back to the very early beginnings of accounting theory.29 However, what has to be kept in mind that the early authors who used the concept of time series depreciation were implicitly or explicitly assuming that prices were stable across time, in which case, time series and crosssection depreciation coincide. P. Hill (2000, p. 6) and R. J. Hill and Hill (2003, p. 617)30 recently argued that a form of time series depreciation that included expected obsolescence was to be preferred over cross-section depreciation for national accounts purposes. Since the depreciation rates tn defined by equation (12) are cross27. This change could be captured by either P tn – P t1 or P tn1 – P t1 n n1. 28. This change could be captured by either P tn – P tn1 or P t1 – P t1 n n1. 29. See, for example, Matheson (1910, p. 35) and Hotelling (1925, p. 341). 30. We agree in general with P. Hill (2000) and R. J. Hill and Hill (2003) that expected obsolescence should be added to cross-sectional depreciation to form an overall depreciation charge. However, Hill and Hill assumed that there was no general inflation in their exposition, so some clarification is needed to deal with this complication.
section depreciation rates and they play a key role in the beginning and end of period-t user costs f tn and u tn defined by equations (14), (21), and (22), it is necessary to clarify their use in the context of P. Hill’s point that these depreciation rates should not be used to measure depreciation in the national accounts. Our response to the Hill critique is twofold:
• Cross-section depreciation rates as we have defined them are affected by anticipated obsolescence in principle, but
• Hill is correct in arguing that cross-section depreciation will not generally equal ex post time series depreciation or anticipated time series depreciation. Before discussing these two points in detail, it is necessary to discuss the concept of obsolescence in a bit more detail. Wykoff, in his discussion of this chapter (chap. 12 comment in this volume), takes a narrow “technological” definition of obsolescence. In his view, an asset can only become obsolete if a new model of the asset becomes available that can deliver at least the service flow of the old asset at a lower price. In his view, if there is no technological change embodied in the new asset, then by definition there is no obsolescence. However, it is possible to define obsolescence more broadly and include the effects of changes in the economy that reduce the demand for the asset’s services to such an extent that its real price falls.31 In what follows, we will use the second, broader concept of obsolescence. One more point must be considered at this point. If there is technological obsolescence due to a new and improved model of the asset being made available, then we assume that the price of the new model has been (somehow) quality adjusted so that the quality-adjusted price is measured in quantity units that are comparable to the older models. Now consider the first bulleted point. Provisionally, we define anticipated obsolescence as a situation where the expected new asset rate of price change (adjusted for quality change) i t is negative.32 For example, everyone 31. This broader definition goes back to Church (1917) at least: “Even though a machine is used fairly and uniformly as contemplated when the rate of depreciation was fixed there is another influence that may shorten its period of usefulness in an unexpected way. The progress of the technical art in which it is employed may develop more efficient machines for doing the same work, so that it becomes advisable to scrap it long before it is worn out. The machine becomes obsolete and the loss of value from this cause is called ‘obsolescence.’ Again, unless the machine is of a very generalized type, such as an engineer’s lathe, another type of misfortune may overtake it. If it is a machine that can only be used for certain definite kinds of work or some special article, as for example many of the machines used in automobile and bicycle manufacture, it may happen that changes in demand, or in style, make the manufacture of that special article no longer profitable. In this case, unless the machine can be transformed for another use, it is a dead loss” (192–93). 32. Paul Schreyer and Peter Hill noted a problem with this provisional definition of anticipated obsolescence as a negative value of the expected asset inflation rate: it will not work in a high-inflation environment. In a high-inflation environment, the nominal asset inflation rate
anticipates that the quality-adjusted price for a new computer next quarter will be considerably lower than it is this quarter.33 Now turn back to equations (5), which define the profile of vintage asset prices P tn at the start of period t. It is clear that the negative i t plays a role in defining the sequence of vintage asset prices, as does the sequence of vintage rental prices that is observed at the beginning of period t, the f tn . Thus, in this sense, crosssectional depreciation rates certainly embody assumptions about anticipated obsolescence. Thus, for an asset that has a finite life, as we move down the rows of equations (5), the number of discounted rental terms declines, and hence asset value declines, which is Griliches’s (1963, p. 119) concept of exhaustion. If the cross-sectional rental prices are monotonically declining (due to their declining efficiency), then as we move down the rows of equations (5), the higher rental terms are being dropped one by one so that the asset values will also decline from this factor, which is Griliches’s concept of deterioration. Finally, a negative anticipated asset inflation rate will cause all future period rentals to be discounted more heavily, which could be interpreted as Griliches’s concept of obsolescence.34 Thus, all of these explanatory factors are embedded in equations (5). Now consider the second bulleted point: that cross-section depreciation is not really adequate to measure time series depreciation in some sense to be determined. Define the ex ante time series depreciation of an asset that is n periods old at the beginning of period t, tn , to be its secondhand market price at the beginning of period t, P tn , less the anticipated price of an asset that is one period older at the beginning of period t 1, (1 i t ) P tn1; i.e., (24)
\Delta_n^t \equiv P_n^t - (1 + i^t)P_{n+1}^t ;   n = 0, 1, 2, \ldots
Thus, anticipated time series depreciation for an asset that is t periods old at the start of period t, tn , differs from the corresponding cross-section dei t will generally be positive, but we will require this nominal rate to be less than general inflation in order to have anticipated obsolescence. Thus, our final definition of anticipated obsolescence is that the real asset inflation rate i∗t defined later by equation (28) be negative; see the discussion immediately preceding equation (30). 33. Our analysis assumes that the various vintages of capital are adjusted for quality change (if any occurs) as they come on the market. In terms of our Canadian empirical example to follow, we are assuming that Statistics Canada correctly adjusted the published investment price deflators for machinery and equipment and nonresidential construction for quality change. We also need to assume that the form of quality change affects all future efficiency factors (i.e., the f nt ) in a proportional manner. This is obviously only a rough approximation to reality: technical change may increase the durability of a capital input, or it may decrease the amount of maintenance or fuel that is required to operate the asset. These changes can lead to nonproportional changes in the f nt . 34. However, it is more likely that what Griliches had in mind was Hill’s second point: that time series depreciation will be larger than cross-section depreciation in a situation where i ∗t is negative.
preciation defined by equation (10), D_n^t \equiv P_n^t - P_{n+1}^t, in that the anticipated new asset rate of price change, i^t, is missing from D_n^t. However, note that the two forms of depreciation will coincide if the expected asset rate of price change i^t is zero. We can use equations (12) and (13) in order to define the ex ante depreciation amounts \Delta_n^t in terms of the cross-section depreciation rates \delta_n^t. Thus, using definitions (24), we have

(25)   \Delta_n^t \equiv P_n^t - (1+i^t)P_{n+1}^t ;   n = 0, 1, 2, \ldots
              = P_n^t - (1+i^t)(1-\delta_n^t)P_n^t        using equation (12)
              = [1 - (1+i^t)(1-\delta_n^t)]P_n^t
              = (1-\delta_0^t)(1-\delta_1^t)\cdots(1-\delta_{n-1}^t)[1 - (1+i^t)(1-\delta_n^t)]P_0^t    using equations (13)
              = (1-\delta_0^t)(1-\delta_1^t)\cdots(1-\delta_{n-1}^t)[\delta_n^t - i^t(1-\delta_n^t)]P_0^t .

We can compare the above sequence of ex ante time series depreciation amounts \Delta_n^t with the corresponding sequence of cross-section depreciation amounts: (26)
D_n^t \equiv P_n^t - P_{n+1}^t ;   n = 0, 1, 2, \ldots
      = P_n^t - (1-\delta_n^t)P_n^t          using equation (12)
      = [1 - (1-\delta_n^t)]P_n^t
      = (1-\delta_0^t)(1-\delta_1^t)\cdots(1-\delta_{n-1}^t)[\delta_n^t]P_0^t      using equations (13).

Of course, if the anticipated rate of asset price change i^t is zero, then equations (25) and (26) coincide, and ex ante time series depreciation equals cross-section depreciation. If we are in the provisional expected obsolescence case with i^t negative, then it can be seen by comparing equations (25) and (26) that \Delta_n^t > D_n^t for all n such that D_n^t > 0; that is, if i^t is negative (and 0 < \delta_n^t < 1), then ex ante time series depreciation exceeds cross-section depreciation over all in-use vintages of the asset. If i^t is positive so that the rental price of each vintage is expected to rise in the future, then ex ante time series depreciation is less than the corresponding cross-section depreciation for all assets that have a positive price at the end of period t. This corresponds to the usual result in the vintage user cost literature where capital gains or an ex post price increase for a new asset leads to a negative term in the user cost formula (plus a revaluation of the cross-section depreciation rate). Here we are restricting ourselves to anticipated capital gains rather than the actual ex post capital gains, and we are focusing on depreciation concepts rather than the full user cost. This is not quite the end of the story in the high-inflation context. National income accountants often readjust asset values at either the beginning
or end of the accounting period to take into account general price level change. At the same time, they also want to decompose nominal interest payments into a real interest component and another component that compensates lenders for general price change. Recall equation (17), which defined the general period-t inflation rate t, and equation (18), which related the period-t nominal interest rate r t to the real rate r ∗t and the inflation rate t. We rewrite equation (18) as follows: (27)
1 + r^{*t} \equiv \frac{1 + r^t}{1 + \rho^t} .
In a similar manner, we define the period-t anticipated rate of real asset price change i ∗t as follows: (28)
1 + i^{*t} \equiv \frac{1 + i^t}{1 + \rho^t} .
Recall definition (24), which defined the ex ante time series depreciation of an asset that is n periods old at the beginning of period t, \Delta_n^t. The first term in this definition reflects the price level at the beginning of period t, while the second term reflects the price level at the end of period t. We now express the second term in terms of the beginning-of-period-t price level. Thus, we define the ex ante real time series depreciation of an asset that is n periods old at the beginning of period t, \Delta_n^{*t}, as follows:

(29)   \Delta_n^{*t} \equiv P_n^t - \frac{(1+i^t)P_{n+1}^t}{1+\rho^t} ;   n = 0, 1, 2, \ldots
                   = P_n^t - \frac{(1+i^t)(1-\delta_n^t)P_n^t}{1+\rho^t}          using equation (12)
                   = \frac{P_n^t[(1+\rho^t) - (1+i^{*t})(1+\rho^t)(1-\delta_n^t)]}{1+\rho^t}    using equation (28)
                   = (1-\delta_0^t)(1-\delta_1^t)\cdots(1-\delta_{n-1}^t)[1 - (1+i^{*t})(1-\delta_n^t)]P_0^t    using equations (13)
                   = (1-\delta_0^t)(1-\delta_1^t)\cdots(1-\delta_{n-1}^t)[\delta_n^t - i^{*t}(1-\delta_n^t)]P_0^t .

The ex ante real time series depreciation amount \Delta_n^{*t} defined by equations (29) can be compared to its cross-section counterpart D_n^t, defined by equations (26) above. Of course, if the real anticipated asset inflation rate i^{*t} is zero, then equations (29) and (26) coincide, and real ex ante time series depreciation equals cross-section depreciation. We are now in a position to provide a more satisfactory definition of expected obsolescence, particularly in the context of high inflation. We now
define expected obsolescence to be the situation where the real rate of asset price change i ∗t is negative. If this real rate is negative, then it can be seen comparing equations (29) and (26) that (30)
\Delta_n^{*t} > D_n^t
for all n such that D_n^t > 0 ;
that is real anticipated time series depreciation exceeds the corresponding cross-section depreciation provided that i ∗t is negative. Thus, the general user cost formulas that we have developed from the vintage accounts point of view can be reconciled to reflect the point of view of national income accountants. We agree with Hill’s point of view that cross-section depreciation is not really adequate to measure time series depreciation as national income accountants have defined it since Pigou (1935, pp. 240–41). Pigou in an earlier work (1924) had a more complete discussion of the obsolescence problem and the problems involved in defining time series depreciation in an inflationary environment. Pigou first pointed out that the national dividend or net annual income (or, in modern terms, real net output) should subtract depreciation or capital consumption (34–35). Pigou then went on to discuss the roles of obsolescence and general price change in measuring depreciation (39–41). Pigou was responsible for many of the conventions of national income accounting that persist down to the present day. He essentially argued that (unanticipated) capital gains or losses be excluded from income and that the effects of general price level change be excluded from estimates of depreciation. He also argued for the inclusion of (foreseen) obsolescence in depreciation. Unfortunately, he did not spell out exactly how all of this could be done in the accounts. Our preceding algebra can be regarded as an attempt to formalize these Pigovian complications. It should be noted that the early industrial engineering literature also stressed that the possibility of obsolescence meant that depreciation allowances should be larger than those implied by mere wear and tear; see Babbage (1835, p. 285), Matheson (1910, pp. 39–40) and Church (1917, pp. 192– 93). Both Matheson and Church noted that obsolescence could arise not only from new inventions but also from shifts in demand. We will end this section by pointing out another important use for the concept of real anticipated time series depreciation. However, before doing this, it is useful to rewrite equations (5), which define the beginning-ofperiod-t asset prices by age n, P tn , in terms of the beginning-of-period-t rental prices f tn , and equations (7), which define the user costs f tn in terms of the asset prices P nt , using definitions (27) and (28), which define the period-t real interest rate r ∗t and expected asset inflation rate i ∗t, respectively, in terms of the corresponding nominal rates r t and i t and the general inflation rate t. Substituting equations (27) and (28) into equations (5) yields the following system of equations:
W. Erwin Diewert
P_0^t = f_0^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right) f_1^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^2 f_2^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^3 f_3^t + \cdots
P_1^t = f_1^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right) f_2^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^2 f_3^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^3 f_4^t + \cdots
\vdots
P_n^t = f_n^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right) f_{n+1}^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^2 f_{n+2}^t + \left(\frac{1+i^{*t}}{1+r^{*t}}\right)^3 f_{n+3}^t + \cdots

Similarly, substituting equations (27) and (28) into equations (7) yields the following system of equations:

(32)   f_0^t = P_0^t - \left(\frac{1+i^{*t}}{1+r^{*t}}\right) P_1^t = (1+r^{*t})^{-1}[P_0^t(1+r^{*t}) - (1+i^{*t})P_1^t]
       f_1^t = P_1^t - \left(\frac{1+i^{*t}}{1+r^{*t}}\right) P_2^t = (1+r^{*t})^{-1}[P_1^t(1+r^{*t}) - (1+i^{*t})P_2^t]
       \vdots
       f_n^t = P_n^t - \left(\frac{1+i^{*t}}{1+r^{*t}}\right) P_{n+1}^t = (1+r^{*t})^{-1}[P_n^t(1+r^{*t}) - (1+i^{*t})P_{n+1}^t] ; \ldots
Note that the nominal interest and inflation rates have entirely disappeared from the equations. In particular, the beginning-of-period user costs f_n^t can be defined in terms of real variables using equations (32) if this is desired. On the other hand, entirely equivalent formulas for the cross-section user costs can be obtained using the initial set of equations (7), which used only nominal variables. Which set of equations is used in practice can be left up to the judgment of the statistical agency or the user.35 The point is that the careful and consistent use of discounting should eliminate the effects of general inflation from our price variables; discounting makes comparable cash flows received or paid out at different points of time. Recall definition (29), which defined \Delta_n^{*t} as the ex ante real time series depreciation of an asset that is n periods old at the beginning of period t. It is convenient to convert this amount of depreciation into a percentage of the 35. In particular, it is not necessary for the statistical agency to convert all nominal prices into real prices as a preliminary step before "real" user costs are calculated. The above algebra shows that our nominal user costs f_n^t can also be interpreted as real user costs that are expressed in terms of the value of money prevailing at the beginning of period t.
initial price of the asset at the beginning of period t, P_n^t. Thus, we define the ex ante time series depreciation rate for an asset that is n periods old at the start of period t, \varepsilon_n^t, as follows:36

(33)   \varepsilon_n^t \equiv \frac{\Delta_n^{*t}}{P_n^t} ;   n = 0, 1, 2, \ldots
                = \frac{P_n^t - (1+i^t)P_{n+1}^t/(1+\rho^t)}{P_n^t}        using equations (29)
                = \frac{P_n^t - (1+i^t)(1-\delta_n^t)P_n^t/(1+\rho^t)}{P_n^t}    using equation (12)
                = 1 - (1+i^{*t})(1-\delta_n^t)
using equation (28).
Now substitute definition (12) for the cross-section depreciation rate \delta_n^t into the nth equation of equations (32) and we obtain the following expression for the beginning-of-period-t user cost of an asset that is n periods old at the start of period t:

(34)   f_n^t = (1+r^{*t})^{-1}[P_n^t(1+r^{*t}) - (1+i^{*t})P_{n+1}^t] ;   n = 0, 1, 2, \ldots
            = (1+r^{*t})^{-1}[P_n^t(1+r^{*t}) - (1+i^{*t})(1-\delta_n^t)P_n^t]       using equation (12)
            = (1+r^{*t})^{-1}[(1+r^{*t}) - (1+i^{*t})(1-\delta_n^t)]P_n^t
            = (1+r^{*t})^{-1}[r^{*t} + \varepsilon_n^t]P_n^t        using equations (33).
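A small Python sketch of the split discussed next, using equations (33) and (34) with purely hypothetical numbers; the asset prices, the 4 percent real rate, and the assumed 3 percent expected real decline in the asset's own price are illustrative assumptions only.

```python
def user_cost_split(P_n, P_n_plus_1, r_star, i_star):
    """Split the user cost of equations (34) into a real interest charge and a real
    national-accounts-depreciation charge, via the ex ante time series depreciation
    rate of equations (33). All arguments are illustrative assumptions."""
    delta_n = 1.0 - P_n_plus_1 / P_n                    # cross-section rate, equation (12)
    eps_n = 1.0 - (1.0 + i_star) * (1.0 - delta_n)      # ex ante time series rate, equation (33)
    discount = 1.0 / (1.0 + r_star)
    interest_charge = discount * r_star * P_n
    depreciation_charge = discount * eps_n * P_n
    return interest_charge, depreciation_charge         # the two charges sum to f_n

interest, dep = user_cost_split(100.0, 90.0, r_star=0.04, i_star=-0.03)
print(round(interest, 2), round(dep, 2), round(interest + dep, 2))
```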
Thus, the period-t vintage user cost for an asset that is n periods old at the start of period t, f tn , can be decomposed into the sum of two terms. Ignoring the discount factor, (1 r ∗t )–1, the first term is r ∗tP tn , which represents the real interest cost of the financial capital that is tied up in the asset, and the second term is tn – P tn tn , which represents a concept of national accounts depreciation. The last line of equations (34) is important if at some stage statistical agencies decide to switch from measures of gross domestic product to measures of net domestic product. If this change occurs, then the user cost for each age n of capital, f tn , must be split up into two terms as in equations (34). The first term, (1 r ∗t)–1r ∗tP tn times the number of units of that type 36. To see that there can be a very large difference between the cross-section depreciation rate tn and the corresponding ex ante time series depreciation rate tn, consider the case of an asset whose vintages yield exactly the same service for each period in perpetuity. In this case, all of the vintage asset prices P tn would be identical and the cross-section depreciation rates tn would all be zero. Now suppose a marvelous new invention is scheduled to come on the market next period that would effectively drive the price of this class of assets down to zero. In this case, i ∗t would be –1 and substituting this expected measure of price change into definitions (33) shows that the ex ante time series depreciation rates would all equal one; that is, under these conditions, we would have tn 1 and tn 0 for all vintages n.
of capital in use, could remain as a primary input charge, while the second term, (1 + r*^t)^(−1) ε^t_n P^t_n times the number of units of that age of capital in use (this is real national accounts depreciation), could be treated as an intermediate input charge (similar to the present treatment of imports). The second term would be an offset to gross investment.37 This completes our discussion of the obsolescence problem.38 In the next section, we turn our attention to the problem of aggregating across ages of the same capital good.

37. Using this methodology, we would say that capital is being maintained intact for the economy if the value of gross investments made during the period (discounted to the beginning of the period) is equal to or greater than the sum of the real national accounts depreciation terms over all assets. This is a maintenance of financial capital concept as opposed to Pigou's (1935, p. 235) maintenance of physical capital concept.

38. It should be noted that our discussion of the obsolescence issue only provides an introduction to the many thorny issues that make this area of inquiry quite controversial. For further discussion, see Oulton (1995), Scott (1995), and Triplett (1996) and the references in these papers.

12.6 Aggregation over Vintages of a Capital Good

In previous sections, we have discussed the beginning-of-period-t stock price P^t_n of an asset that is n periods old and the corresponding beginning- and end-of-period user costs, f^t_n and u^t_n. The stock prices are relevant for the construction of real wealth measures of capital, and the user costs are relevant for the construction of capital services measures. We now address the problems involved in obtaining quantity series that will match up with these prices.

Let the period-(t − 1) investment in a homogeneous asset for the sector of the economy under consideration be I^{t−1}. We assume that the starting capital stock for a new unit of capital at the beginning of period t is K^t_0 and that this stock is equal to the new investment in the asset in the previous period; that is, we assume

(35)  K^t_0 ≡ I^{t−1}.

Essentially, we are assuming that the length of the period is short enough so that we can neglect any contribution of investment to current production; a new capital good becomes productive only in the period immediately following its construction. In a similar manner, we assume that the capital stock available of an asset that is n periods old at the start of period t is K^t_n and that this stock is equal to the gross investment in this asset class during period t − n − 1; that is, we assume

(36)  K^t_n ≡ I^{t−n−1} ;        n = 0, 1, 2, . . .
Given these definitions, the value of the capital stock in the given asset class for the sector of the economy under consideration (the wealth capital stock) at the start of period t is
(37)  W^t ≡ P^t_0 K^t_0 + P^t_1 K^t_1 + P^t_2 K^t_2 + . . .
           = P^t_0 I^{t−1} + P^t_1 I^{t−2} + P^t_2 I^{t−3} + . . .        using equations (36).

Turning now to the capital services quantity, we assume that the quantity of services that an asset of a particular age provides at a point in time is proportional (or, more precisely, is equal) to the corresponding stock. Thus, we assume that the quantity of services provided in period t by a unit of the capital stock that is n periods old at the start of period t is K^t_n, defined by equations (36) above. Given these definitions, the value of capital services for all vintages of the asset in the given asset class for the sector of the economy under consideration (the productive services capital stock) during period t, using the end-of-period user costs u^t_n defined by equations (8), is

(38)  S^t ≡ u^t_0 K^t_0 + u^t_1 K^t_1 + u^t_2 K^t_2 + . . .
           = u^t_0 I^{t−1} + u^t_1 I^{t−2} + u^t_2 I^{t−3} + . . .        using equations (36).
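As an illustration of how equations (36)-(38) combine, the sketch below (hypothetical service life, price and user cost profiles, and investment figures; none of these are the Canadian data) builds the value aggregates W^t and S^t directly from an investment history:

```python
# Illustrative computation of the wealth stock W^t (37) and the value of
# capital services S^t (38) from an investment history, using equation (36)
# K_n^t = I^{t-n-1}.  All figures below are hypothetical.

L = 4                                     # assumed service life in periods
P = [1.00, 0.75, 0.50, 0.25]              # P_n^t: asset prices by age n = 0, ..., L-1
u = [0.30, 0.30, 0.30, 0.30]              # u_n^t: end-of-period user costs by age

# investment history, most recent first: I^{t-1}, I^{t-2}, ..., I^{t-L}
I = [12.0, 10.0, 11.0, 9.0]

W_t = sum(P[n] * I[n] for n in range(L))  # equation (37)
S_t = sum(u[n] * I[n] for n in range(L))  # equation (38)
print(W_t, S_t)
```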
Now we are faced with the problem of decomposing the value aggregates W^t and S^t defined by equations (37) and (38) into separate price and quantity components. If we assume that each new unit of capital lasts only a finite number of periods—L, say—then we can solve this value decomposition problem using normal index number theory. Thus, define the period-t stock price and quantity vectors, P^t and K^t respectively, as follows:

(39)  P^t ≡ [P^t_0, P^t_1, . . . , P^t_{L−1}] ;                              t = 0, 1, . . . , T;
      K^t ≡ [K^t_0, K^t_1, . . . , K^t_{L−1}] = [I^{t−1}, I^{t−2}, . . . , I^{t−L}].
Fixed-base or chain indexes may be used to decompose value ratios into price change and quantity change components. In the empirical work that follows, we have used the chain principle.39 Thus, the value of the capital stock in period t, W^t, relative to its value in the preceding period, W^{t−1}, has the following index number decomposition:

(40)  W^t / W^{t−1} = P(P^{t−1}, P^t, K^{t−1}, K^t) Q(P^{t−1}, P^t, K^{t−1}, K^t) ;        t = 1, 2, . . . , T,

where P and Q are bilateral price and quantity indexes, respectively.

39. Given smoothly trending price and quantity data, the use of chain indexes will tend to reduce the differences between Paasche and Laspeyres indexes compared to the corresponding fixed-base indexes, and so chain indexes are generally preferred; see Diewert (1978, p. 895) for a discussion.

In a similar manner, we define the period-t end-of-period user cost price and quantity vectors, u^t and K^t, respectively, as follows:

(41)  u^t ≡ [u^t_0, u^t_1, . . . , u^t_{L−1}] ;                              t = 0, 1, . . . , T;
      K^t ≡ [K^t_0, K^t_1, . . . , K^t_{L−1}] = [I^{t−1}, I^{t−2}, . . . , I^{t−L}].
We ask that the value of capital services in period t, S^t, relative to its value in the preceding period, S^{t−1}, have the following index number decomposition:

(42)  S^t / S^{t−1} = P(u^{t−1}, u^t, K^{t−1}, K^t) Q(u^{t−1}, u^t, K^{t−1}, K^t) ;        t = 1, 2, . . . , T,
where again P and Q are bilateral price and quantity indexes, respectively.

There is now the problem of choosing the functional form for either the price index P or the quantity index Q.40 In the empirical work that follows, we used the Fisher (1922) Ideal price and quantity indexes. These indexes appear to be best from the axiomatic viewpoint41 and can also be given strong economic justifications.42 It should be noted that our use of an index number formula to aggregate both stocks and services by age is more general than the usual aggregation procedures, which essentially assume that the different vintages of the same capital good are perfectly substitutable so that linear aggregation techniques can be used.43 However, as we shall see in subsequent sections, the more general mode of aggregation suggested here frequently reduces to the traditional linear method of aggregation provided that the prices by age all move in strict proportion over time.

Many researchers and statistical agencies relax the assumption that an asset lasts only a fixed number of periods (L, say) and make assumptions about the distribution of retirements around the average service life, L. In our empirical work that follows, for simplicity, we will stick to the sudden-death assumption: that all assets in the given asset class are retired at age L. However, this simultaneous retirement assumption can readily be relaxed (at the cost of much additional computational complexity) using a methodology developed by Hulten (1990, p. 125), who subdivided a vintage into subcomponents, each of which had a different expected length of life.

We now have all of the pieces that are required in order to decompose the capital stock of an asset class and the corresponding capital services into price and quantity components.

40. Obviously, given one of these functional forms, we may use equation (40) to determine the other.
41. See Diewert (1992a, pp. 214–23).
42. See Diewert (1976, pp. 129–34).
43. This more general form of aggregation was first suggested by Diewert and Lawrence (2000). For descriptions of the more traditional linear method of aggregation, see Jorgenson (1989, p. 4) or Hulten (1990, pp. 121–27; 1996, pp. 152–65).
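For readers who want to reproduce this kind of calculation, a minimal sketch of one chain link of the decomposition in equations (40) or (42) using the Fisher Ideal formula is given below; the vintage price and quantity vectors shown are hypothetical, not the Canadian series used in this chapter:

```python
from math import sqrt

def fisher(p0, p1, q0, q1):
    """Fisher Ideal price and quantity indexes for one bilateral comparison."""
    laspeyres_p = sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))
    paasche_p = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))
    price_index = sqrt(laspeyres_p * paasche_p)
    value_ratio = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q0))
    # the Fisher indexes satisfy the product test, so Q = value ratio / P
    return price_index, value_ratio / price_index

# hypothetical vintage price vectors P^{t-1}, P^t and quantity vectors K^{t-1}, K^t
P_prev, P_curr = [1.00, 0.70, 0.40], [1.05, 0.72, 0.41]
K_prev, K_curr = [10.0, 9.0, 8.0], [11.0, 10.0, 9.0]

P_idx, Q_idx = fisher(P_prev, P_curr, K_prev, K_curr)   # one link of equation (40)
print(P_idx, Q_idx)
```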
However, in order to construct price and quantity components for capital services, we need information on the relative efficiencies f^t_n of the various vintages of the capital input or, equivalently, on the cross-sectional vintage depreciation rates δ^t_n, in order to use equation (42) above. The problem is that we do not have accurate information on either of these series, so in what follows we will assume a standard asset life L and make additional assumptions on either the pattern of vintage efficiencies or depreciation rates. Thus, in a sense, we are following the same somewhat mechanical strategy that was used by early cost accountants like Daniels (1933, p. 303). However, our mechanical strategy is more complex than that used by the early accountants in that we translate assumptions about the pattern of cross-section depreciation rates into implications for the pattern of cross-section rental prices and asset prices, taking into account the complications induced by discounting and expected future asset price changes.

In the following sections, we will consider four different sets of assumptions and calculate the resulting aggregate capital stocks and services using Canadian data. We illustrate how the various depreciation models differ from each other using annual Canadian data on two broad classes of asset:44
• machinery and equipment
• nonresidential structures

We use Canadian data on gross investment in these two asset classes (in current and in constant dollars) because these series extend back to 1926, and hence capital stocks can be formed without making arbitrary starting-value assumptions. Our first problem is to decide on the average age of retirement for each of these asset classes. One source is the OECD (1993), which reported average service lives for various asset classes in fourteen OECD countries. For machinery and equipment (excluding vehicles) used in manufacturing activities, the average life ranged from eleven years for Japan to twenty-six years for the United Kingdom. For vehicles, the average service lives for passenger cars ranged from two years in Sweden to fourteen years in Iceland, and for road freight vehicles the average life ranged from three years in Sweden to fourteen years in Iceland. For buildings and structures, the average service lives ranged from fifteen years (for petroleum and gas structures in the United States) to eighty years (for railway structures in Sweden). Faced with this wide range of possible lives, we decided to follow the example of Angus Maddison (1993) and assume an average service life of fourteen years for machinery and equipment and thirty-nine years for nonresidential structures. The Canadian data that we used may be found in Diewert (2004). We turn now to our first efficiency and depreciation model.

44. More accurate models would work with more disaggregated investment series.
12.7 The One-Hoss-Shay Model of Efficiency and Depreciation

In section 12.2, we noted that Böhm-Bawerk (1891, p. 342) postulated that an asset would yield a constant level of services throughout its useful life of L years and then collapse in a heap to yield no services thereafter. This has come to be known as the one-hoss-shay or lightbulb model of depreciation. Hulten (1990, p. 124) noted that this pattern of relative efficiencies has considerable intuitive appeal for many assets. The basic assumptions of this model are that the period-t efficiencies, and hence the cross-sectional rental prices f^t_n, are all equal to a common value, say f^t, for ages n less than L, and that for older ages the efficiencies fall to zero. Thus, we have

(43)  f^t_n = f^t        for n = 0, 1, 2, . . . , L − 1;
           = 0           for n = L, L + 1, L + 2, . . .
Now substitute equations (43) into the first equation in equations (5) to get the following formula45 for the rental price f^t in terms of the price of a new asset at the beginning of year t, P^t_0:

(44)  f^t = P^t_0 / [1 + γ^t + (γ^t)^2 + . . . + (γ^t)^{L−1}],

where the period-t discount factor γ^t is defined in terms of the period-t nominal interest rate r^t and the period-t expected asset rate of price change i^t as follows:

(45)  γ^t ≡ (1 + i^t) / (1 + r^t).
Now that the period-t rental price f^t for an unretired asset has been determined, substitute equations (43) into equations (5) and determine the sequence of period-t asset prices by age n, P^t_n:

(46)  P^t_n = f^t [1 + γ^t + (γ^t)^2 + . . . + (γ^t)^{L−1−n}]        for n = 0, 1, 2, . . . , L − 1
           = 0                                                        for n = L, L + 1, L + 2, . . .

Finally, use equations (8) to determine the end-of-period-t rental prices, u^t_n, in terms of the corresponding beginning-of-period-t rental prices, f^t_n:

(47)  u^t_n ≡ (1 + r^t) f^t_n ;        n = 0, 1, 2, . . .
45. This formula simplifies to f^t = P^t_0 (1 − γ^t)/[1 − (γ^t)^L], provided that γ^t is less than 1 in magnitude. This last restriction does not hold for our Canadian data since, for some years, i^t exceeds r^t. However, equation (44) is still valid under these conditions.
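A compact sketch of the one-hoss-shay formulas (43)-(47) follows; the service life, interest rate, and expected asset inflation rate are illustrative values only, not the Canadian data:

```python
# One-hoss-shay vintage asset prices and user costs, equations (43)-(47).
# The parameter values below are illustrative only.

L, P0 = 14, 1.0                  # service life and price of a new asset
r, i = 0.07, 0.03                # nominal interest rate and expected asset inflation
gamma = (1 + i) / (1 + r)        # discount factor, equation (45)

f = P0 / sum(gamma ** n for n in range(L))                   # rental price, equation (44)

# asset prices by age, equation (46); the price is zero once the asset is retired
P = [f * sum(gamma ** j for j in range(L - n)) for n in range(L)] + [0.0]

u = [(1 + r) * f for _ in range(L)]                          # end-of-period user costs, equation (47)

print(round(f, 4), [round(p, 3) for p in P])
```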
Given the asset prices defined by equations (46), we could use equations (12) above to determine the corresponding cross-section depreciation rates δ^t_n. We will not table these depreciation rates, since our focus is on constructing measures of the capital stock and of the flow of services that the stocks yield.

We have data in current and constant dollars for investment in nonresidential structures and for machinery and equipment in Canada for the years 1926 to 1999 inclusive; see Diewert (2004) for a description of these data. As was mentioned in the previous section, we follow the example set by Maddison (1993) and assume an average service life of fourteen years for machinery and equipment and thirty-nine years for nonresidential structures. Thus, 1965 is the first year for which we have data on all thirty-nine ages of nonresidential structures. Now it is a straightforward matter to use the asset prices by age defined by equations (46) above (where L equals 39) and apply equation (40) in the previous section to aggregate over the thirty-nine ages of nonresidential capital using the Fisher (1922) Ideal index number formula and form aggregate price and quantity series for the nonresidential construction (wealth) capital stock, P^t_NR and K^t_NR, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can be found in Diewert at five-year intervals. Similarly, we use equations (46) above (where L equals 14) and apply equation (40) in the previous section to aggregate over the fourteen ages of machinery and equipment using the Fisher Ideal index number formula and form aggregate price and quantity series for the machinery and equipment (wealth) capital stock, P^t_ME and K^t_ME, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can also be found in Diewert at five-year intervals. In this first model, we assume that producers exactly anticipate the asset rates of price change, i^t_NR and i^t_ME, for nonresidential construction and for machinery and equipment respectively; these ex post rates of price change are listed in Diewert. Having constructed the aggregate price and quantity of nonresidential capital, P^t_NR and K^t_NR respectively, and the aggregate price and quantity of machinery and equipment, P^t_ME and K^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for the wealth stock, which we denote by P(1)^t and K(1)^t, where the 1 indicates that this is our first model in a grand total of twelve alternative aggregate capital stock models.

Using equations (43), (44), and (47), along with the data tabled in Diewert (2004), we can construct the end-of-period user costs for each of our thirty-nine ages of nonresidential construction capital. Now use equation (38) to construct the service flow aggregate for nonresidential construction for each year. Then we use equation (42) in the previous section (where L equals 39) to aggregate over the thirty-nine ages of nonresidential capital
using the Fisher Ideal index number formula and form the aggregate rental price for nonresidential construction, u^t_NR, and the corresponding services aggregate, k^t_NR, for the years 1965–99.46 These series, along with their annual average (geometric) growth rates, can be found in Diewert (2004) at five-year intervals. Similarly, we use equation (42) above (where L equals 14) and aggregate over the fourteen ages of machinery and equipment using the Fisher Ideal index number formula and form aggregate capital services price and quantity series, u^t_ME and k^t_ME, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can also be found in Diewert at five-year intervals. Having constructed the aggregate price and quantity of nonresidential capital services, u^t_NR and k^t_NR respectively, and the aggregate price and quantity of machinery and equipment services, u^t_ME and k^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for capital services, which we denote by u(1)^t and k(1)^t, where the 1 again indicates that this is our first model in a grand total of twelve alternative aggregate capital stock models. The various data series will be compared graphically in section 12.11 below.

We turn now to our second one-hoss-shay depreciation model. In this model, instead of assuming that producers correctly anticipate each year's ex post asset inflation rates, we assume that producers use the current consumer price index (CPI) inflation rate as the estimator of anticipated asset inflation rates. This model turns out to be equivalent to the constant real interest rate model that is frequently used by statistical agencies.47 In terms of computations, we simply replace the two ex post asset inflation rates, i^t_NR and i^t_ME, with the CPI inflation rate ρ^t listed in Diewert (2004) and then repeat all of the computations made to implement model 1 above. When we compare the service prices and quantities in model 1, the perfect-foresight model, with the corresponding service prices and quantities in model 2, the constant real interest rate model, a number of things stand out:

• The model 2 user costs are much less volatile (as could be expected).
• The model 1 user costs grow much more quickly.
• The model 2 levels of capital services are much higher.
• The model 1 and 2 average growth rates for capital services are very similar.
46. Since all of the vintage rental prices are equal, it turns out that the aggregate rental price is equal to this common vintage rental price and the service aggregate is equal to the simple sum over the vintages. This result is an application of Hicks's (1939, pp. 312–13) aggregation theorem; that is, if all prices in the aggregate move in strict proportion over time, then any one of these prices can be taken as the price of the aggregate.

47. The nominal interest rate is still used in forming the end-of-period user costs; otherwise, only real interest rates are used in this model.
Thus, the two models give very different results overall. The average rate of price increase for the model 2 capital services aggregate was 3.29 percent per year, which is much lower than the model 1 estimate of 4.85 percent per year. On the quantity side, the model 2 flow of nonresidential construction capital services increased from $2,727 million to $11,564 million (constant 1965) Canadian dollars, for an annual average (geometric) growth rate of 4.34 percent, while the model 2 flow of machinery and equipment capital services increased from $3,588 million to $34,556 million (constant 1965) Canadian dollars, for an annual average growth rate of 6.89 percent. The model 2 capital services aggregate grew at an annual average growth rate of 5.49 percent, compared to the model 1 capital services annual average growth rate of 5.61 percent.

We turn now to our third one-hoss-shay depreciation model. In this model (model 3), instead of assuming that producers correctly anticipate each year's ex post asset inflation rates, we assume that they can anticipate the trends in asset inflation rates. In Diewert (2004), we describe in detail how these trends were determined. In terms of computations, we use exactly the same program that we used to implement model 1 except that we replace the rather volatile nominal interest rates r^t with the smoothed nominal interest rates that are listed in Diewert. We also replace the two ex post asset inflation rates, i^t_NR and i^t_ME, with their smoothed counterparts listed in Diewert.

Comparing the numbers across the three models, there are some small differences between the capital stocks generated by our three variants of the one-hoss-shay model of depreciation, but the average growth rates are virtually identical. There is more variation across the three models in the movement of the stock prices, with model 1 giving the highest rate of price growth for the capital aggregate (4.35 percent per year), followed by model 3 (4.17 percent per year) and then model 2 (3.97 percent per year). The model 1, 2, and 3 aggregate prices, P(1)–P(3), and the corresponding quantities of capital, K(1)–K(3), are graphed in figures 12.1–12.6 in section 12.11. The tremendous volatility of the model 1 rental prices, u(1), will become evident from viewing figure 12.7 in section 12.11. Thus, the use of ex post asset inflation rates as ex ante or anticipated inflation rates leads to user costs that are extremely volatile. The model 3 aggregate user costs, u(3), while still more volatile than the constant real interest rate user costs, u(2), are reasonable and smooth out the fluctuations in the u(1) series. The u(2) series lies below the other two user cost series because the constant real interest rate user costs make no allowance for the extra depreciation that arises from the anticipated price declines that are due to obsolescence; that is, the u(2) series ignores the systematic real price declines in the price of machinery and equipment. Thus, while model 2 is acceptable, we prefer model 3, since this model includes the effects of anticipated obsolescence, whereas model 2 does not.
Examination of figures 12.4–12.6 in section 12.11 will show that all three one-hoss-shay models give rise to much the same aggregate capital stocks. The constant real interest rate capital stocks K(2) are the biggest, followed by the smoothed anticipated inflation stocks K(3), and the fully anticipated inflation stocks K(1) are the smallest. The aggregate capital services graphed in figures 12.10–12.12 show much the same pattern, but with more dispersion: the constant real interest rate aggregate capital services k(2) are the biggest, followed by the smoothed anticipated inflation capital services k(3), and the fully anticipated inflation capital services k(1) are the smallest. We turn now to our second model of depreciation and efficiency.

12.8 The Straight-Line Depreciation Model

The straight-line method of depreciation is very simple in a world without price change: one simply makes an estimate of the most probable length of life for a new asset, L periods, say, and then the original purchase price P^t_0 is divided by L to yield an estimate of period-by-period depreciation for the next L periods. In a way, this is the simplest possible model of depreciation, just as the one-hoss-shay model was the simplest possible model of efficiency decline.48 The use of straight-line depreciation dates back to the 1800s at least; see Matheson (1910, p. 55), Garcke and Fells (1893, p. 98), and Canning (1929, pp. 265–66).

48. In fact, it can be verified that if the nominal interest rate r^t and the nominal asset inflation rate i^t are both zero, then the one-hoss-shay efficiency model will be entirely equivalent to the straight-line depreciation model.

We now set out the equations that describe the straight-line model of depreciation in the general case when the anticipated asset rate of price change i^t is nonzero. Assuming that the asset has a life of L periods and that the cross-sectional amounts of depreciation D^t_n ≡ P^t_n − P^t_{n+1} defined by equation (10) above are all equal for the assets in use, it can be seen that the beginning-of-period-t vintage asset prices P^t_n will decline linearly for L periods and then remain at zero; that is, the P^t_n will satisfy the following restrictions:

(48)  P^t_n = P^t_0 (L − n)/L        for n = 0, 1, 2, . . . , L
           = 0                        for n = L + 1, L + 2, . . .

Recall definition (12) above, which defined the cross-sectional depreciation rate for an asset that is n periods old at the beginning of period t, δ^t_n. Using equations (48) and the nth equation in equations (13), we have
(49)  P^t_n / P^t_0 = (1 − δ^t_0)(1 − δ^t_1) . . . (1 − δ^t_{n−1}) = 1 − n/L        for n = 1, 2, . . . , L.

Using equation (49) for n and n + 1, it can be shown that

(50)  1 − δ^t_n = [L − (n + 1)] / (L − n) ;        n = 0, 1, 2, . . . , L − 1.
Now substitute equations (49) and (50) into the general user cost formula, equations (14), in order to obtain the period-t end-of-period straight-line user costs, u^t_n:49

(51)  u^t_n = (1 − δ^t_0) . . . (1 − δ^t_{n−1}) [(1 + r^t) − (1 + i^t)(1 − δ^t_n)] P^t_0 ;        n = 0, 1, 2, . . . , L − 1
           = (1 − n/L) {(1 + r^t) − (1 + i^t)[L − (n + 1)]/(L − n)} P^t_0.
Equations (48) give us the sequence of asset prices by age that are required to calculate the wealth capital stock, while equations (51) give us the user costs by age that are required to calculate capital services for the asset. It should be noted that if the anticipated asset inflation rate i^t is large enough compared to the nominal interest rate r^t, then the user cost u^t_n can be negative. This means that the corresponding asset becomes an output rather than an input for period t.50

49. The user costs for n = L, L + 1, L + 2, . . . are all zero.

50. However, one is led to wonder if the model is reasonable if some vintages of the asset have negative user costs while other vintages have positive ones.
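The sketch below implements the straight-line formulas (48) and (51) for illustrative parameter values (not the Canadian data); it also flags the ages, if any, for which the user cost turns negative in the way just described:

```python
# Straight-line depreciation: vintage asset prices (48) and user costs (51).
# Parameter values are illustrative, not the Canadian data.

L, P0 = 14, 1.0
r, i = 0.02, 0.12                 # with these values the newest ages get negative user costs

P = [P0 * (L - n) / L for n in range(L + 1)]            # equation (48)

u = [(1 - n / L) * ((1 + r) - (1 + i) * (L - n - 1) / (L - n)) * P0
     for n in range(L)]                                  # equation (51)

negative_ages = [n for n, x in enumerate(u) if x < 0]
print([round(x, 4) for x in u])
print("ages with negative user costs:", negative_ages)
```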
At this point, we can proceed in much the same manner as in the previous section. We use the asset prices defined by equations (48) above (where L equals 39) and apply equation (40) in section 12.7 to aggregate over the thirty-nine ages of nonresidential capital using the Fisher Ideal index number formula, and we form aggregate price and quantity series for the nonresidential construction (wealth) capital stock, P^t_NR and K^t_NR, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can be found in Diewert (2004) at five-year intervals. Similarly, we use equations (48) above (where L equals 14) and apply equation (40) to aggregate over the fourteen ages of machinery and equipment using the Fisher Ideal index number formula, and we form aggregate price and quantity series for the machinery and equipment (wealth) capital stock, P^t_ME and K^t_ME, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can also be found in Diewert at five-year intervals. In this fourth model, we assume that producers exactly anticipate the ex post asset rates of price change, i^t_NR and i^t_ME, for nonresidential construction and for machinery and equipment, respectively. Having constructed the aggregate price and quantity of nonresidential capital, P^t_NR and K^t_NR respectively, and the aggregate price and quantity of machinery and equipment, P^t_ME and K^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for the wealth stock, which we denote by P(4)^t and K(4)^t.

Using equations (51) along with the data tabled in Diewert (2004), we can construct the end-of-period user costs for each of our thirty-nine ages of nonresidential construction capital. Now use equation (38) to construct the service flow aggregate for nonresidential construction for each year. Then we use equation (42) in the previous section (where L equals 39) to aggregate over the thirty-nine ages of nonresidential capital using the Fisher Ideal index number formula and form the aggregate rental price for nonresidential construction, u^t_NR, and the corresponding services aggregate, k^t_NR, for the years 1965–99.51 These series, along with their annual average (geometric) growth rates, can be found in Diewert at five-year intervals. Similarly, we use equation (42) above (where L equals 14) and aggregate over the fourteen ages of machinery and equipment using the Fisher Ideal index number formula, and we form aggregate capital services price and quantity series, u^t_ME and k^t_ME, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can also be found in Diewert at five-year intervals. Having constructed the aggregate price and quantity of nonresidential capital services, u^t_NR and k^t_NR respectively, and the aggregate price and quantity of machinery and equipment services, u^t_ME and k^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for capital services, which we denote by u(4)^t and k(4)^t.

51. It turned out that some of our rental prices were negative. This may not be a major theoretical problem since in this case the corresponding capital input becomes a net output. However, the computations were carried out using the econometrics computer program SHAZAM, and the index number option fails when any price is negative. In this case, it was necessary to write a subroutine that would compute the Fisher indexes when some prices were negative. The four inner products that are building blocks of the Fisher indexes must all be positive in order to take the positive square root; this condition was satisfied by the data in all cases.

We turn now to our second straight-line depreciation model. In this model 5, instead of assuming that producers correctly anticipate each year's ex post asset inflation rates, we assume that producers use the current CPI inflation rate as the estimator of anticipated asset rates of price change. In terms of computations, we simply replace the two ex post asset rates of price change, i^t_NR and i^t_ME, by the CPI inflation rate ρ^t listed in Diewert (2004) and then repeat all of the computations made to implement model 4 above. It turns out that the model 5 constant real interest rate capital stocks (and prices) are exactly equal to their model 4 counterparts. This follows from equations (48), which describe the pattern of asset prices by age: in
both models 4 and 5 (and model 6, to be considered shortly), these asset prices do not depend on r^t or i^t, and hence the resulting asset prices and capital stocks will be identical. Hence, there is no need to table the capital stocks and prices for model 5. However, the model 5 user costs and capital service flows by age (listed in Diewert 2004 at five-year intervals) are very different from their model 4 counterparts.

We turn now to our third straight-line depreciation model, which we call model 6. In this model, instead of assuming that producers correctly anticipate each year's ex post asset inflation rates, we assume that they can anticipate the trends in asset rates of price change. In terms of computations, we use exactly the same program that we used to implement model 4 except that we replace the rather volatile nominal interest rate r^t that is listed in Diewert (2004) with the smoothed nominal interest rate that is also listed in Diewert. We also replace the two ex post asset inflation rates, i^t_NR and i^t_ME, with their smoothed counterparts, also listed in Diewert. As mentioned earlier, the model 6 capital stocks (and prices) are exactly equal to their model 4 counterparts in table 12.7. Hence, there is no need to table the capital stocks and prices for model 6. However, the model 6 vintage user costs and capital service flows are very different from their model 4 and 5 counterparts.

On the quantity side, model 6 gives much the same results as the other two straight-line depreciation models, models 4 and 5; see figures 12.10–12.12 in section 12.11 for graphs of k(4)–k(6). In particular, the average annual (geometric) rate of growth of aggregate capital services for models 4, 5, and 6 was 5.30 percent, 5.08 percent, and 5.24 percent per year, respectively. However, on the user cost side, the three models give very different results. The perfect-foresight model, model 4, gave the highest annual average growth rate for the aggregate price of capital services, 4.96 percent per year, while the constant real interest rate model, model 5, gave the lowest average growth rate, 3.61 percent per year. The smoothed anticipated prices model, model 6, gave an intermediate growth rate for the price of capital services, 4.31 percent per year. As will be seen from figures 12.7–12.9, the model 5 and 6 aggregate user costs were much smoother than the volatile model 4 user costs. We turn now to our third class of depreciation and efficiency models.

12.9 The Declining Balance or Geometric Depreciation Model

The declining balance method of depreciation dates back to Matheson (1910, p. 55) at least.52

52. Matheson (1910, p. 91) used the term "diminishing value" to describe the method. Hotelling (1925, p. 350) used the term "the reducing balance method," while Canning (1929, p. 276) used the term "declining balance formula."

In terms of the algebra presented in section 12.3, the
method is very simple: all of the cross-sectional vintage depreciation rates δ^t_n defined by equation (12) are assumed to be equal to the same rate δ, where δ is a positive number less than one; that is, we have, for all time periods t,

(52)  δ^t_n = δ ;        n = 0, 1, 2, . . .

Substitution of equation (52) into equations (14) leads to the following formula for the sequence of period-t vintage user costs:

(53)  u^t_n = (1 − δ)^n [(1 + r^t) − (1 + i^t)(1 − δ)] P^t_0 ;        n = 0, 1, 2, . . .
           = (1 − δ)^n u^t_0 ;                                         n = 1, 2, . . .
The second set of equations in (53) says that all of the vintage user costs are proportional to the user cost for a new asset. This proportionality means that we do not have to use an index number formula to aggregate over vintages to form a capital services aggregate. To see this, using equations (53), the period-t services aggregate S^t defined earlier by equations (38) can be rewritten as follows:

(54)  S^t ≡ u^t_0 K^t_0 + u^t_1 K^t_1 + u^t_2 K^t_2 + . . .
          = u^t_0 [K^t_0 + (1 − δ)K^t_1 + (1 − δ)^2 K^t_2 + . . .]
          = u^t_0 K^t_A ,

where the period-t capital aggregate K^t_A is defined as

(55)  K^t_A ≡ K^t_0 + (1 − δ)K^t_1 + (1 − δ)^2 K^t_2 + . . .
If the depreciation rate δ and the vintage capital stocks are known, then K^t_A can readily be calculated using equation (55). Then, using the last line of equations (54), we see that the value of capital services (summed over all ages), S^t, decomposes into the price term u^t_0 times the quantity term K^t_A. Hence, it is not necessary to use an index number formula to aggregate over ages of the asset using this depreciation model. A similar simplification occurs when calculating the wealth stock using this depreciation model. Substitution of equation (52) into equations (13) leads to the following formula for the sequence of period-t asset prices by age n:

(56)  P^t_n = (1 − δ)^n P^t_0 ;        n = 1, 2, . . .
Equations (56) say that all of the period-t asset prices are proportional to the price of a new asset. This proportionality means that, again, we do not have to use an index number formula to aggregate over vintages to form a capital stock aggregate. To see this, using equations (56), the period-t wealth aggregate W^t defined earlier by equations (37) can be rewritten as follows:

(57)  W^t ≡ P^t_0 K^t_0 + P^t_1 K^t_1 + P^t_2 K^t_2 + . . .
          = P^t_0 [K^t_0 + (1 − δ)K^t_1 + (1 − δ)^2 K^t_2 + . . .]
          = P^t_0 K^t_A ,
where K^t_A was defined by equation (55). Thus, K^t_A can serve as either a capital stock aggregate or a flow-of-services aggregate, which is a major advantage of this model.53

53. This advantage of the model has been pointed out by Jorgenson (1996b, 1989) and his coworkers. Its early application dates back to Jorgenson and Griliches (1967) and Christensen and Jorgenson (1969, 1973).

There is a further simplification of the model that is useful in applications. If we compare equation (55) for period t + 1 and period t, we see that the following formula results using definitions (39):

(58)  K^{t+1}_A = K^{t+1}_0 + (1 − δ) K^t_A .

Thus, the period-(t + 1) aggregate capital stock, K^{t+1}_A, is equal to the investment in new assets that took place in period t, which is K^{t+1}_0, plus (1 − δ) times the period-t aggregate capital stock, K^t_A. This means that, given a starting value for the capital stock, we can readily update it using just the depreciation rate and the new investment in the asset during the prior period.
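A minimal sketch (in Python, with hypothetical figures) of how the user cost formula (53), the aggregate stock of equation (55) built up with the recursion (58), and the services decomposition (54) fit together:

```python
# Geometric (declining balance) model: the aggregate stock K_A^t of equation (55)
# is built up with the recursion (58), and the value of services is u_0^t * K_A^t
# as in equation (54).  All figures below are hypothetical.

delta = 0.05
r, i, P0 = 0.07, 0.02, 1.0
investment = [10.0, 11.0, 12.0, 13.0, 14.0]      # I^t for consecutive periods

K_A = 0.0                                        # assume no capital before the first period
stocks = []
for I_t in investment:
    K_A = I_t + (1 - delta) * K_A                # equation (58): K_A^{t+1} = K_0^{t+1} + (1 - delta) K_A^t
    stocks.append(K_A)

u0 = ((1 + r) - (1 + i) * (1 - delta)) * P0      # user cost of a new asset, equation (53) with n = 0
services_value = u0 * stocks[-1]                 # equation (54): S^t = u_0^t K_A^t
print([round(k, 2) for k in stocks], round(services_value, 3))
```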
We now need to address the problem of determining the depreciation rate δ for a particular asset class. Matheson (1910, pp. 69–91) was perhaps the first engineer to address this problem. On the basis of his experience, he simply postulated some approximate rates that could be applied, ranging from 3 to 20 percent. The algebra corresponding to Matheson's method for determining δ was explicitly described by the accountant Canning (1929, p. 276). Let the initial value of the asset be V_0, and let its scrap value n years later be V_n. Then V_0, V_n, and the depreciation rate δ are related by the following equation:

(59)  V_n = (1 − δ)^n V_0 .

Canning goes on to explain that 1 − δ may be determined by solving the following equation:

(60)  log(1 − δ) = [log V_n − log V_0] / n .
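For example (with a hypothetical scrap value), equation (60) can be applied directly:

```python
# Infer the declining balance rate from an initial value and a scrap value,
# equations (59)-(60); the numbers are hypothetical.
V0, Vn, n = 1000.0, 100.0, 20        # initial value, scrap value after n years
delta = 1 - (Vn / V0) ** (1 / n)     # solves V_n = (1 - delta)^n V_0
print(round(delta, 4))               # about 0.1087
```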
It is clear that Matheson used this framework to determine depreciation rates even though he did not formally lay out the above straightforward algebra. However, Canning (1929, p. 276) pointed out that the scrap value, V_n, which is not determined very accurately from an a priori point of view, is
the tail that is wagging the dog; that is, this poorly determined value plays a crucial role in the determination of the depreciation rate. An effective response to Canning's (1929) criticism of the declining balance method of depreciation did not emerge until relatively recently, when Hall (1971), Beidelman (1973, 1976), and Hulten and Wykoff (1981a, b) used an entire array of used asset prices at a point in time in order to determine the geometric depreciation rate that best matched up with the data.54 Another theoretical possibility would be to use information on rental prices by age of asset in order to deduce the depreciation rate.55

This brings us to our next problem: how should we convert our estimated asset lives of thirty-nine years for structures and fourteen years for machinery and equipment into comparable geometric rates? One possible method for converting an average asset life—L periods, say—into a comparable geometric depreciation rate is to argue as follows. Suppose that we believe that the straight-line model of depreciation is the correct one and that the asset under consideration has a useful life of L periods. Suppose further that investment in this type of asset is constant over time at one unit per period and that asset prices are constant over time. Under these conditions, the long-run equilibrium capital stock for this asset would be56

(61)  1 + (L − 1)/L + (L − 2)/L + . . . + 2/L + 1/L = L(L + 1)/(2L) = (L + 1)/2.
Under the same conditions, the long-run equilibrium geometric depreciation capital stock would be equal to the following sum:

(62)  1 + (1 − δ) + (1 − δ)^2 + . . . = 1/[1 − (1 − δ)] = 1/δ.

Now find the depreciation rate δ that will make the two capital stocks equal; that is, equate equation (61) to equation (62) and solve for δ. The resulting δ is

(63)  δ = 2/(L + 1).
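A small check that the conversion rule (63) equates the long-run stocks in equations (61) and (62) for the two asset lives assumed in this chapter; this is a verification sketch, not part of the chapter's own calculations:

```python
# Check that delta = 2/(L+1) equates the long-run straight-line stock (61)
# with the long-run geometric stock (62).
for L in (39, 14):                                   # assumed lives for structures and M&E
    delta = 2 / (L + 1)                              # equation (63)
    straight_line_stock = sum((L - n) / L for n in range(L + 1))   # equation (61) = (L + 1)/2
    geometric_stock = 1 / delta                      # equation (62)
    print(L, round(delta, 4), round(straight_line_stock, 6), round(geometric_stock, 6))
```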
54. Jorgenson (1996a) has a nice review of most of the empirical studies of depreciation. It should be noted that Beidelman (1973, 1976) and Hulten and Wykoff (1981a; 1996, p. 22) showed that equation (59) must be adjusted to correct for the early retirement of assets. The accountant Schmalenbach (1959, p. 91; first German edition published in 1919) also noticed this problem.

55. This possibility is mentioned by Hulten and Wykoff (1996, p. 15).

56. Recall equations (48), which imply that the vintage asset prices are proportional. Hence, Hicks's Aggregation Theorem will imply that the capital aggregate will be the simple sum on the left-hand side of equation (61).

Obviously, there are a number of problematical assumptions that were made in order to derive the depreciation rate δ that corresponds to the
length of life L,57 but equation (63) gives us at least a definite method of conversion from one model to the other. Since we assumed that the average length of life for nonresidential construction was L equal to thirty-nine years, applying the conversion formula given by equation (63) implies that δ_NR equals .05; that is, we assume that the declining balance or geometric depreciation rate for nonresidential construction in Canada is 5 percent. Similarly, our assumed life of fourteen years for machinery and equipment translates into δ_ME equal to a 13 1/3 percent geometric depreciation rate for this asset class.

57. The two assumptions that are the least justified are (a) the assumption that the straight-line depreciation model is the correct model to do the conversion and (b) the assumption that investment has been constant back to minus infinity. Hulten and Wykoff (1996, p. 16) made the following suggestions for converting an L into a δ: "Information is available on the average service life, L, from several sources. The rate of depreciation for non-marketed assets can be estimated using a two step procedure based on the 'declining balance' formula δ = X/L. Under the 'double declining balance' formula, X = 2. The value of X can be estimated using the formula X = δL for those assets for which these estimates are available. In the Hulten-Wykoff studies, the average value of X for producer's durable equipment was found to be 1.65 (later revised to 1.86). For nonresidential structures, X was found to be 0.91. Once X is fixed, δ follows for other assets whose average service life is available."

There is one remaining problem to deal with, and then we can proceed to table the results for three geometric depreciation models for Canada. The problem is this: before 1926, we do not have reliable investment data, but the effects of investments made prior to 1926 live on forever in the infinite-lived geometric depreciation model that we considered in equations (54) to (58) above. In the case of machinery and equipment investments made before 1926, by the time we get to 1965, what is left of the original investments is negligible. However, in the case of a $1,000 investment in nonresidential structures made in 1925, $128.50 of it would still be available as a productive input in 1965, assuming a 5 percent geometric depreciation rate. Hence, we need a method for estimating the geometric capital stock that is available at the start of 1926 in order not to bias downward our estimates of the geometric capital stock for nonresidential construction for the period 1965–99. We decided to assume that nonresidential investment for the period prior to 1926 grew at the same rate that it grew during the years 1926–99.58 Thus, for the years 1927 to 1999, we took investment in nonresidential construction during the current year divided by the corresponding investment in the prior year (both in constant dollars) as our dependent variable and regressed this variable on a constant. The estimated constant turned out to be 1.0509. Hence, for the period prior to 1926, we assumed that investments in nonresidential construction grew at the rate g = .05 (i.e., a 5 percent growth rate).

58. This method for obtaining a starting value for the geometric capital stock is due to Kohli (1982); see also Fox and Kohli (1998).

Thus, if I^1926_NR was the investment in 1926, we assumed that the investments in prior years were
(64)  I^1926_NR/(1 + g),  I^1926_NR/(1 + g)^2,  I^1926_NR/(1 + g)^3,  . . .
Using assumption (64), we can calculate an estimate of the starting capital stock for nonresidential construction at the start of 1927 as

(65)  K^1927_NR ≡ I^1926_NR [1 + (1 − δ)/(1 + g) + (1 − δ)^2/(1 + g)^2 + (1 − δ)^3/(1 + g)^3 + . . .]
                = I^1926_NR / [1 − (1 − δ)/(1 + g)]
                = I^1926_NR (1 + g)/(g + δ),

where g = .05 and δ = .05.
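A sketch of the starting-value calculation (65) and the subsequent recursion (58) follows; the investment figures are placeholders rather than the actual Canadian series tabled in Diewert (2004):

```python
# Starting value for the geometric capital stock, equations (64)-(65),
# followed by the perpetual inventory recursion (58).
# I_1926 and the later investment figures are placeholders.

g, delta = 0.05, 0.05
I_1926 = 100.0
K_1927 = I_1926 * (1 + g) / (g + delta)      # equation (65): sums the assumed pre-1926 investments

# constant-dollar investment from 1927 onward (placeholder numbers)
investment = {1927: 104.0, 1928: 109.0, 1929: 112.0}

K = {1927: K_1927}
for year in sorted(investment):
    K[year + 1] = investment[year] + (1 - delta) * K[year]   # equation (58)
print({y: round(v, 1) for y, v in K.items()})
```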
Now we can use formula (58), starting at the year t = 1927, to build up the capital stock for each of our two asset classes. For nonresidential construction, our starting 1927 capital stock was defined by equation (65), and for machinery and equipment, it was simply the 1926 investment in machinery and equipment—I^1926_ME, say.

At this point, we can proceed in much the same manner as in the previous section. We have already explained how we can use equations (58) to form the aggregate capital stocks for nonresidential construction and machinery and equipment. From equations (57), it can be seen that the corresponding capital stock price is P^t_0, the price of a new asset at the beginning of year t. These series, along with their annual average (geometric) growth rates, can be found in table 11 of Diewert (2004) at five-year intervals. In this seventh model, having constructed the aggregate price and quantity of nonresidential capital, P^t_NR and K^t_NR respectively, and the aggregate price and quantity of machinery and equipment, P^t_ME and K^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for the wealth stock, which we denote by P(7)^t and K(7)^t.

Comparing the capital stock prices for model 7 with those of model 4, we find that these numbers are exactly the same. This is because, in both the straight-line depreciation model and the geometric model, the price of a new asset acts as the aggregate stock price over all vintages. However, when we use the Fisher formula to aggregate the two types of capital prices together to get either P(4) or P(7), we get slightly different numbers because the aggregate quantities of the two types of asset differ in the two models. The Fisher Ideal aggregate price for these two capital stock components increased from 1 to 3.6243 over this period. The price of a unit of nonresidential construction capital increased by 5.08 percent per year, and the price of a unit of machinery and equipment capital increased by only 1.37 percent per year on average for model 7. The average rate of price increase
for the model 7 capital aggregate was 3.86 percent per year. This should be compared to the average rate of price increase for the one-hoss-shay capital aggregate, which was much higher at 4.35 percent per year. On the quantity side, the stock of nonresidential construction capital increased from $32.8 billion to $115.9 billion (constant 1965) Canadian dollars, for an annual average (geometric) growth rate of 3.78 percent (3.85 percent for the straight-line model), while the stock of machinery and equipment capital increased from $19.1 billion to $199.7 billion (constant 1965) Canadian dollars, for an annual average growth rate of 7.15 percent (7.19 percent for the straight-line model). The model 7 declining balance capital aggregate grew at an annual average growth rate of 4.85 percent. The corresponding aggregate growth rates for the one-hoss-shay and straight-line models were 4.95 percent and 4.88 percent per year, respectively.

We turn now to the service flow part of our seventh model, where we assume that producers exactly anticipate the asset rates of price change, i^t_NR and i^t_ME, for nonresidential construction and for machinery and equipment, respectively; these ex post rates are listed in table A2 of Diewert (2004). The user cost for a new asset at the start of period t, u^t_0, is defined in equations (53). Equations (54) show that this user cost matches up with the corresponding aggregated-over-ages capital stock, K^t_A, so the computations are simplified in this model. Denote these user costs by u^t_NR and u^t_ME for our two assets and denote the corresponding service aggregates by k^t_NR and k^t_ME, respectively. We renormalize these series so that both user costs are unity in 1965.59 These series, along with their annual average (geometric) growth rates, can be found in table 12 of Diewert at five-year intervals. Having constructed the aggregate price and quantity of nonresidential capital services, u^t_NR and k^t_NR respectively, and the aggregate price and quantity of machinery and equipment services, u^t_ME and k^t_ME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for capital services, which we denote by u(7)^t and k(7)^t.60

59. Before normalization, the service flow aggregates k^t_NR and k^t_ME are exactly equal to the corresponding stock aggregates. Thus the rates of growth of the corresponding stock and flow variables will be the same.

60. These series are plotted in figures 12.7 and 12.10 in section 12.11.

Comparison of the declining balance growth rates with the corresponding straight-line growth rates shows that there are some substantial differences. For example, the average annual geometric rate of growth for the user cost of machinery and equipment was 3.40 percent per year for the straight-line model versus 2.75 percent per year for the geometric model. The geometric model rate of capital services price growth of 4.51 percent per year should be compared to the straight-line model rate of capital services price growth of 4.96 percent per year, which in turn can be compared to the average rate of price increase for the one-hoss-shay capital services
aggregate, which was somewhat higher at 4.85 percent per year. The use of ex post asset inflation rates again leads to user costs that are extremely volatile; see figure 12.7 below. On the quantity side, the model 7 flow of nonresidential construction capital services increased from $1,916 million to $6,764 million (constant 1965) Canadian dollars, for an annual average (geometric) growth rate of 3.78 percent, while the flow of machinery and equipment capital services increased from $3,069 million to $32,069 million (constant 1965) Canadian dollars, for an annual average growth rate of 7.15 percent. The capital services aggregate grew at an annual average growth rate of 5.55 percent, compared to the 4.85 percent annual average growth rate for the aggregate capital stock. The geometric model average rate of capital services growth of 5.55 percent per year can be compared to the straight-line growth rate of capital services of 5.30 percent per year and to the average rate of growth for the one-hoss-shay capital services aggregate of 5.61 percent per year.

We turn now to our second geometric depreciation model, which will eliminate the volatility problem mentioned in the last paragraph. In this model 8, instead of assuming that producers correctly anticipate each year's ex post asset rates of price change, it is assumed that producers use the current CPI inflation rate as the estimator of anticipated asset price change. In terms of computations, we simply replace the two ex post asset rates of price change, i^t_NR and i^t_ME, with the CPI inflation rate ρ^t listed in table A2 of Diewert (2004) and then repeat all of the computations made to implement model 7 above. It turns out that the model 8 constant real interest rate capital stocks (and prices) are exactly equal to their model 7 counterparts in table 11 of Diewert (2004). This follows from equations (57), which show that the aggregate (over ages) stock price is equal to the price of a new asset, which in turn does not depend on our assumptions about interest rates or expected asset inflation rates. Hence, there is no need to table the capital stocks and prices for model 8 (or model 9 below). However, the model 8 vintage user costs and capital service flows are very different from their model 2 counterparts and slightly different from their model 5 counterparts. Table 13 in Diewert (2004) lists the model 8 rental prices and flows of capital services for the geometric depreciation (constant real interest rate) Canadian capital stocks at five-year intervals over the period 1965–99. The overall annual rate of growth of capital services for the straight-line model was 5.08 percent per year, compared to 5.37 percent per year for the geometric model, where both models assumed constant real interest rates. This is not a large difference. In figures 12.7 and 12.8 in section 12.11, it can be seen that the user cost series that corresponds to the geometric model with constant real interest rates, u(8), is much less volatile than that of the corresponding geometric model that assumes perfect foresight, u(7). We turn now to our third geometric depreciation model, which we call
model 9. In this model, instead of assuming that producers correctly anticipate each year's ex post asset rates of price change, we assume that they can anticipate the trends in these rates. In terms of computations, we use exactly the same program that we used to implement model 7 except that we replace the rather volatile nominal interest rate r^t that was listed in table A2 of Diewert (2004) with the smoothed nominal interest rate that is listed in table A3 of Diewert. We also replace the two ex post asset rates of price change, i^t_NR and i^t_ME, with their smoothed counterparts listed in table A3 of Diewert. As mentioned earlier, the model 9 capital stocks (and prices) are exactly equal to their model 7 counterparts in table 11 of Diewert (2004). Hence, there is no need to table the capital stocks and prices for model 9. However, the model 9 vintage user costs are somewhat different from their model 7 and 8 counterparts. Table 14 in Diewert lists the model 9 rental prices and flows of capital services for the Canadian capital stock at five-year intervals over the period 1965–99.

When we compare the two capital services series, k^t_NR and k^t_ME, across the three declining balance models, they turn out to be identical; hence, so are their growth rates. Hence, when we aggregate across these two assets to form the model 7, 8, and 9 capital services aggregates, we find that the average annual geometric growth rates are quite similar: 5.55 percent, 5.37 percent, and 5.52 percent, respectively. However, the corresponding rental price series for each type of asset, u^t_NR and u^t_ME, are no longer identical across these models. The geometric aggregate rental price grew at an annual geometric rate of 3.88 percent per year, while the straight-line aggregate rental price grew at a 4.31 percent per year rate. In figures 12.7 and 12.9 in section 12.11, it can be seen that the user cost series that corresponds to the geometric model with smoothed asset inflation rates, u(9), is much less volatile than that of the corresponding geometric model that assumes perfect foresight, u(7), but the trend in each series is similar. We turn now to our fourth and final class of depreciation and relative efficiency models.

12.10 The Linear Efficiency Decline Model

Recall that our first class of models (the one-hoss-shay models) assumed that the efficiency (or cross-section user cost) of the asset remained constant over the useful life of the asset. In our second class of models (the straight-line depreciation models), we assumed that the cross-section depreciation of the asset declined at a linear rate. In our third class of models (the geometric depreciation models), we assumed that cross-section depreciation declined at a geometric rate. Comparing the third class with the second class of models, it can be seen that geometric depreciation is more accelerated than straight-line depreciation; that is, depreciation is relatively
large for new vintages compared to older ones. In this section, we will consider another class of models that gives rise to an accelerated pattern of depreciation: the class of models that exhibit a linear decline in efficiency.61 It is relatively easy to develop the mathematics of this model. Let f^t_0 be the period-t rental price for an asset that is new at the beginning of period t. If the useful life of the asset is L years and the efficiency decline is linear, then the sequence of period-t cross-sectional user costs f^t_n is defined as follows:

(66)  f^t_n = f^t_0 (L − n)/L        for n = 0, 1, 2, . . . , L − 1;
           = 0                        for n = L, L + 1, L + 2, . . .

61. Diewert (2004) showed how linear efficiency decline models can be derived from one-hoss-shay models where maintenance expenditures are expected to increase linearly over time.
Now substitute equations (66) into the first equation in equations (5) and get the following formula for the rental price f^t_0 in terms of the price of a new asset at the beginning of year t, P^t_0:

(67)  f^t_0 = L P^t_0 / [L + (L − 1)γ^t + (L − 2)(γ^t)^2 + . . . + 1(γ^t)^{L−1}],

where the period-t discount factor γ^t is defined in terms of the period-t nominal interest rate r^t and the period-t expected asset rate of price change i^t in the usual way:

(68)  γ^t ≡ (1 + i^t)/(1 + r^t).
Now that f t0 has been determined, substitute equation (67) into equations (66), substitute the resulting equations into equations (5), and determine the sequence of period-t asset prices by age n, P tn: (69)
(L n) (L n 1)(t) . . . 1(t)L1n P tn P t0 L (L 1)(t) . . . 1(t)L1
for n 0, 1, 2, . . . , L 1 0
for n L, L 1, L 2, . . . .
Finally, use equations (8) to determine the end-of-period-t rental prices, u tn, in terms of the corresponding beginning-of-period-t rental prices, f tn: (70)
u tn (1 r t) f tn
n 0, 1, 2, . . .
Given the asset prices by age n defined by equations (69), we could use equations (12) above to determine the corresponding cross-section depre61. Diewert (2004) showed how linear efficiency decline models can be derived from onehoss-shay models where maintenance expenditures are expected to increase linearly over time.
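To make the mechanics of equations (66)–(70) concrete, here is a minimal Python sketch that computes the new-asset rental price, the vintage asset prices, and the end-of-period user costs for the linear efficiency decline model. The asset life, interest rate, anticipated asset price change, and new-asset price are illustrative values chosen for the example, not figures from Diewert (2004).

```python
def linear_efficiency_decline(P0, L, r, i):
    """Vintage prices and user costs for the linear efficiency decline model.

    P0 : price of a new asset at the beginning of period t
    L  : useful life in years
    r  : nominal interest rate for period t
    i  : anticipated asset rate of price change for period t
    Returns (f, P, u): lists indexed by age n = 0, ..., L.
    """
    gamma = (1.0 + i) / (1.0 + r)                       # discount factor, eq. (68)

    # Denominator of eq. (67): L + (L-1)*gamma + ... + 1*gamma**(L-1)
    denom = sum((L - j) * gamma**j for j in range(L))
    f0 = L * P0 / denom                                 # eq. (67)

    # Cross-section rentals decline linearly with age, eq. (66)
    f = [(L - n) / L * f0 for n in range(L)] + [0.0]

    # Vintage asset prices by age, eq. (69)
    P = [P0 * sum((L - n - j) * gamma**j for j in range(L - n)) / denom
         for n in range(L)] + [0.0]

    # End-of-period user costs, eq. (70)
    u = [(1.0 + r) * fn for fn in f]
    return f, P, u


if __name__ == "__main__":
    # Illustrative parameters only (not data from the chapter).
    f, P, u = linear_efficiency_decline(P0=100.0, L=5, r=0.06, i=0.02)
    for n, (fn, Pn, un) in enumerate(zip(f, P, u)):
        print(f"age {n}: f = {fn:7.3f}  P = {Pn:7.3f}  u = {un:7.3f}")
```

Note that the sketch reproduces the price of a new asset exactly ($P^t_0$ equals the input new-asset price), since the vintage price formula (69) is just the discounted sum of the linearly declining rentals in (66).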
We will not table these depreciation rates, since our focus is on constructing measures of the capital stock and of the flow of services that the stocks yield. However, we will note that if we recall definition (10) for the period-$t$ cross-section depreciation of an asset of vintage $n$, $D^t_n \equiv P^t_n - P^t_{n+1}$, and assume that the nominal interest rate $r^t$ and the nominal asset rate of price change $i^t$ are both zero, then using equations (69), it can be shown that

(71)
$$D^t_n \equiv P^t_n - P^t_{n+1} = \frac{L-n}{L(L+1)/2}\,P^t_0 \qquad \text{for } n = 0, 1, 2, \ldots, L;$$
that is, when r t i t 0, depreciation declines at a linear rate for the linear efficiency decline model. When depreciation declines at a linear rate, the resulting formula for depreciation is called the sum of the year digits formula.62 Thus, just as the one-hoss-shay and straight-line depreciation models coincide when r t i t 0, so too do the linear efficiency decline and sum of the year digits depreciation models. In our tenth model, we assume that producers exactly anticipate the asset rates of price change, i tNR and i tME, for nonresidential construction and for machinery and equipment, respectively. We use the Fisher Ideal index to aggregate over ages using formula (69) above for the asset prices by age. Having constructed the aggregate price and quantity of nonresidential capital, P tNR and K tNR respectively, and the aggregate price and quantity of machinery and equipment, P tME and K tME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for the wealth stock, which we denote by P (10) t and K (10) t. The average rate of price increase for the linear efficiency decline capital stock aggregate was 4.13 percent per year, which is lower than the corresponding rate of aggregate price increase for the one-hoss-shay aggregate of 4.35 percent per year; see table 15 in Diewert (2004). On the quantity side, the stock of nonresidential construction capital increased from $29.6 billion to $98.5 billion (constant 1965) Canadian dollars, for an annual average (geometric) growth rate of 3.60 percent, while the stock of machinery and equipment capital increased from $15.0 billion to $166.6 billion (constant 1965) Canadian dollars, for an annual average growth rate of 7.33 percent. Of course, the levels of the capital aggregate are only about two-thirds to three-quarters of the corresponding one-hoss-shay levels due to the accelerated form of depreciation for the former model. The linearly declining efficiency capital aggregate grew at an annual average growth rate of 4.74 percent, which is lower than the corresponding rate of growth for the one-hoss-shay aggregate of 4.95 percent. Using equations (66), (67), and (70) along with the data tabled in tables 62. Canning (1929, 277) describes the method in some detail, so it was already in common use by that time.
A1 and A2 of Diewert (2004), we can construct the end-of-period user costs for each of our thirty-nine types of nonresidential construction capital. As usual, use equation (38) to construct the service flow aggregate for nonresidential construction for each year. Then we use equation (42), where L equals 39, to aggregate over the thirty-nine types of nonresidential capital using the Fisher Ideal index number formula and form the aggregate rental price for nonresidential construction, u tNR, and the corresponding services aggregate, k tNR, for the years 1965–99.63 These series, along with their annual average (geometric) growth rates, can be found in table 16 of Diewert at five-year intervals. Similarly, we use equation (42), where L equals 14, and aggregate over the fourteen types of machinery and equipment using the Fisher Ideal index number formula and form aggregate capital services price and quantity series, u tME and k tME, for the years 1965–99. These series, along with their annual average (geometric) growth rates, can also be found in table 16 of Diewert at five-year intervals. Having constructed the aggregate price and quantity of nonresidential capital services, u tNR and k tNR respectively, and the aggregate price and quantity of machinery and equipment services, u tME and k tME respectively, we may again use the Fisher Ideal formula and aggregate these two series into a single aggregate price and quantity series for capital services, which we denote by u (10) t and k(10) t.64 Table 16 in Diewert (2004) shows that the price of a unit of nonresidential construction capital services increased by 6.32 percent per year and the price of a unit of machinery and equipment capital services increased by 2.54 percent per year on average. The average rate of price increase for the linearly declining efficiency capital services aggregate was 4.32 percent per year, which is much less than the corresponding rate of price increase for the one-hoss-shay aggregate capital services price, which was 4.85 percent per year. On the quantity side, the flow of nonresidential construction capital services increased from $2,066 million to $7,467 million (constant 1965) Canadian dollars, for an annual average (geometric) growth rate of 3.85 percent, while the flow of machinery and equipment capital services increased from $3,162 million to $33,554 million (constant 1965) Canadian dollars, for an annual average growth rate of 7.19 percent. The capital services aggregate grew at an annual average growth rate of 5.56 percent, compared to the 5.61 percent annual average growth rate for the corresponding one-hoss-shay capital services. As usual, the linear efficiency decline user costs u (10) that are based on the assumption of perfect foresight are very volatile; see figure 12.7 in section 12.11. We turn now to our second linear efficiency decline model, which will 63. Since all of the rental prices by age of asset are proportional to each other, again Hicks’s (1939, p. 312–13) aggregation theorem implies that all of the usual indexes are equal to each other. 64. These series are plotted in figures 12.7 and 12.10.
eliminate the volatility problem mentioned in the last paragraph. In this model 11, instead of assuming that producers correctly anticipate each year’s ex post asset rates of price change, it is assumed that producers use the current CPI inflation rate as estimators of these rates. This model turns out to be equivalent to the constant real interest rate model. As usual, in terms of computations, we simply replace the two ex post asset rates of price change, i tNR and i tME, with the CPI inflation rate t listed in table A2 of Diewert (2004) and then repeat all of the computations made to implement model 10 above. The model 11 capital stock quantities are very similar to the model 10 quantities. The overall average growth rate for the price of the aggregate stock is a bit higher for model 10 (4.13 percent per year) than for model 11 (3.94 percent per year). The one-hoss-shay capital services aggregate that assumes constant real interest rates, k(2), is quite close to the linear efficiency decline capital services aggregate that assumes constant real interest rates, k(11), and their average annual geometric growth rates are also close: 5.49 percent for k(2) versus 5.43 percent for k(11). However, k(11) is 15 to 20 percent bigger in levels than the first linear efficiency decline capital services aggregate k(10), which assumed that anticipated asset inflation rates were equal to ex post rates. The average annual geometric growth rate for k(10) was somewhat higher, at 5.56 percent per year. We turn now to our third linear efficiency decline model. In this model (model 12), instead of assuming that producers correctly anticipate each year’s ex post asset rates of price change, we assume that they can anticipate the trends in these rates. In terms of computations, we use exactly the same program that we used to implement model 10 except that we replace the rather volatile nominal interest rates r t that are listed in table A2 of Diewert (2004) with the smoothed nominal interest rates that are listed in table A3 of Diewert. We also replace the two ex post asset inflation rates, itNR and itME, with their smoothed counterparts listed in table A3 of Diewert. It turns out that there are some small differences between the capital stocks generated by our three variants of the linear efficiency decline model, but the average growth rates are virtually identical. There is more variation across the three models in the movement of the stock prices, with model 10 giving the highest rate of price growth for the capital aggregate (4.13 percent per year), followed by model 12 (4.04 percent per year) and then model 11 (3.94 percent per year). However, there are large differences in the levels and small differences in the growth rates for capital services generated by the three models: the average annual geometric growth rates for k(10), k(11), and k(12) are 5.56 percent, 5.43 percent, and 5.55 percent per year. The average annual geometric growth rates for K (10), K (11), and K (12) are 4.74 percent, 4.72 percent, and 4.74 percent per year, respectively. However, there is much more variation across the three models in the
movement of the service prices, with model 10 giving the highest rate of price growth for the capital services aggregate (4.32 percent per year), followed by model 12 (3.78 percent per year) and then model 11 (3.27 percent per year). When we view figures 12.7–12.9 in section 12.11, the aggregate linear efficiency decline user cost series u (10), which assumes that anticipated asset inflation rates are equal to the actual ex post rates, is the highest, very volatile curve. Smoothing these volatile asset inflation rates leads to the u (12) curve, which is much smoother and captures the trend in u (10). The constant real interest rate user cost series, u (11), lies far below the other two aggregate user cost series for much of the sample period.

Figures 12.10–12.12 in section 12.11 plot the three linear efficiency decline aggregate capital services series, k(10)–k(12). Each of these series is reasonably smooth, but note that they are spread out much more than the corresponding aggregate capital stock series, K (10)–K (12), that are plotted in figures 12.4–12.6. Thus, the different assumptions on anticipated asset price movements generate substantially different measures of capital services for these linear efficiency decline models. The constant real interest rate series, k(11), is the top curve, followed by the smoothed asset inflation rates model, k(12), and the ex post asset inflation rates model, k(10), is the lowest curve.

In the following section, we make some graphical comparisons across our twelve models.

12.11 A Comparison of the Twelve Models

In this section, we will compare stock prices and user costs across our four types of model that are based on alternative assumptions about the structure of depreciation or asset efficiency, holding constant our assumptions about nominal interest rates and anticipated asset price movements. We will also compare capital stocks and service flows across depreciation and relative efficiency models, holding constant our assumptions about nominal interest rates and anticipated asset price movements.

Figure 12.1 plots the aggregate capital stock prices generated by our four depreciation and efficiency models, assuming that ex post asset price movements are perfectly anticipated. Note the volatility of these series. The one-hoss-shay stock prices P (1) are the highest, followed by the linear efficiency decline prices P (10). The straight-line and geometric depreciation prices, P (4) and P (7), are the lowest and are very close to each other.

Figure 12.2 plots the aggregate capital stock prices generated by our four depreciation and efficiency models, assuming that ex post asset price changes are equal to changes in the CPI. This model assumes a constant real interest rate of 4 percent. These stock prices are much smoother than those exhibited in figure 12.1, and they are also much closer to each other.
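All of the aggregate series compared in this section are formed by aggregating the nonresidential construction and machinery-and-equipment series with the Fisher Ideal index number formula, as described above. The sketch below shows one way a two-asset chained Fisher price and quantity aggregate can be computed; the rental price and service flow arrays are placeholders, not the Canadian series used in the chapter.

```python
import numpy as np

def chained_fisher(prices, quantities):
    """Chained Fisher Ideal price and quantity indexes for several assets.

    prices, quantities : arrays of shape (T, N) -- T periods, N assets
    Returns (P_index, Q_index), each of length T, normalized to 1 in period 0.
    """
    p = np.asarray(prices, dtype=float)
    q = np.asarray(quantities, dtype=float)
    T = p.shape[0]
    P = np.ones(T)
    Q = np.ones(T)
    for t in range(1, T):
        laspeyres_p = (p[t] @ q[t - 1]) / (p[t - 1] @ q[t - 1])
        paasche_p   = (p[t] @ q[t])     / (p[t - 1] @ q[t])
        laspeyres_q = (p[t - 1] @ q[t]) / (p[t - 1] @ q[t - 1])
        paasche_q   = (p[t] @ q[t])     / (p[t] @ q[t - 1])
        P[t] = P[t - 1] * np.sqrt(laspeyres_p * paasche_p)   # Fisher price link
        Q[t] = Q[t - 1] * np.sqrt(laspeyres_q * paasche_q)   # Fisher quantity link
    return P, Q

# Placeholder user costs and service flows for two asset classes (NR, M&E).
u = [[1.00, 1.00], [1.05, 0.98], [1.12, 0.97]]   # rental prices by period
k = [[2000, 3000], [2100, 3250], [2180, 3520]]   # service flows by period
P, Q = chained_fisher(u, k)
print("aggregate rental price index:", P)
print("aggregate capital services index:", Q)
```

The Fisher index is the geometric mean of the Laspeyres and Paasche indexes, so the product of the price and quantity indexes reproduces the change in total rental payments, which is one reason it is used for these aggregations.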
Fig. 12.1  Capital stock prices using ex post price changes
Fig. 12.2  Alternative stock prices with constant real interest rates
The one-hoss-shay and linear efficiency decline prices, P (2) and P (11), are virtually indistinguishable on the top, followed by the straight-line depreciation prices P (5) and then followed very closely by the geometric stock prices P (8). Figure 12.3 plots the aggregate capital stock prices generated by our four depreciation and efficiency models, assuming that anticipated asset price changes are equal to smoothed ex post asset price changes. These stock price series smooth out considerably the much rougher series exhibited in
Fig. 12.3  Alternative stock prices using smoothed asset price changes
figure 12.1. The one-hoss-shay stock prices P (3) are the highest, followed by the linear efficiency decline prices P (12). The straight-line and geometric depreciation prices, P (6) and P (9), are the lowest and are very close to each other. Figure 12.4 plots the aggregate capital stocks that correspond to the perfectly anticipated asset prices assumption for the four depreciation models. The one-hoss-shay capital stock curve K (1) is the highest, followed by the straight-line depreciation curve K (4), which in turn is followed by the geometric depreciation curve K (7). The linear efficiency decline stock K (10) is the lowest curve. These results are intuitively plausible: the onehoss-shay model has the least accelerated form of depreciation, followed by the straight-line model, followed by the geometric depreciation model, and the linear efficiency decline model generates the most accelerated form of depreciation. In an economy where investment is growing over time, the capital stocks corresponding to the least accelerated form of depreciation will grow the quickest, followed by the more accelerated forms, and the capital stock corresponding to the most accelerated form of depreciation will grow the slowest. Figures 12.5 and 12.6 plot the aggregate capital stocks that correspond to the constant real interest rate and the smoothed asset price models: the results are much the same as those exhibited in figure 12.4. Figure 12.7 plots the aggregate user costs generated by our four classes of depreciation and efficiency models, assuming that ex post asset price movements are perfectly anticipated. Note that the user cost series in figure 12.7 are even more volatile than the capital stock prices charted in figure 12.1. The one-hoss-shay and straight-line depreciation user costs, u (1) and u (4), are the highest, followed by the geometric depreciation and linear efficiency decline user costs, u (7) and u (10). Figure 12.8 plots the aggregate user costs generated by our four classes
Fig. 12.4  Alternative capital stocks using ex post price changes
Fig. 12.5  Alternative capital stocks using constant real rates
Fig. 12.6  Alternative capital stocks using smoothed asset price changes
Fig. 12.7  Alternative user costs using ex post asset price changes
of depreciation and efficiency models, assuming that ex post asset price changes are equal to changes in the consumer price index. This model assumes a constant real interest rate of 4 percent. These user costs are much smoother than those exhibited in figure 12.7, and they are also much closer to each other. The straight-line depreciation user costs u (5) are on top, followed by the one-hoss-shay, geometric, and linear efficiency decline user costs, u (2), u (8), and u (11), which are too close to each other to be distinguished visually. Figure 12.9 plots the aggregate user costs generated by our four classes of depreciation and efficiency models, assuming that anticipated asset price changes are equal to smoothed ex post asset price changes. These user cost series smooth out considerably the much rougher series exhibited in figure 12.7. The straight-line and one-hoss-shay user costs, u (6) and u (3), are very close to each other on top; but near the end of our sample period, the one-hoss-shay user costs u (3) dip below the straight-line depreciation user costs u (6). The geometric depreciation and linear efficiency decline user costs, u (9) and u (12), are fairly close to each other on the bottom. These two models represent the most accelerated forms of depreciation. Figures 12.10 and 12.11 plot the aggregate capital services that correspond to the perfectly anticipated asset price change and the constant real interest rate models. The aggregate services using ex post asset price changes plotted in figure 12.10 are more volatile and more widely dispersed than the aggregate services plotted in figures 12.11 and 12.12, as one might expect. The linear efficiency decline services are the top curve k(10), followed by the geometric depreciation services k(7), followed by the onehoss-shay services k(1), and the straight-line depreciation capital services k(4) are the bottom curve. The aggregate services using constant real
Fig. 12.8  Alternative user costs using constant real interest rates
Fig. 12.9  Alternative user costs using smoothed asset price changes
Fig. 12.10  Alternative capital services using ex post price changes
Fig. 12.11  Alternative capital services using constant real rates
Fig. 12.12  Alternative capital services using smoothed asset price changes
interest rates plotted in figure 12.11 are fairly similar to the smoothed capital services exhibited in figure 12.12. For the constant real interest rate services in figure 12.11, the one-hoss-shay and linear efficiency decline services, k(2) and k(11), are at the top, followed very closely by the geometric depreciation services k(8), and the straight-line depreciation capital services k(5) are the bottom curve. Figure 12.12 plots the aggregate capital services that correspond to the smoothed asset price change model; that is, figure 12.12 is the quantity counterpart to figure 12.9. The linear efficiency decline capital services curve k(12) is the highest, followed closely by the geometric depreciation and one-hoss-shay curves, k(9) and k(3), which are very close to each other. The straight-line depreciation curve k(6) is the lowest curve and is well below the other three curves. Thus, overall, three
of our four depreciation and efficiency models give rise to much the same measures of capital services, holding constant the assumptions about asset price changes and the reference interest rate. However, the straight-line depreciation capital services seem to be consistently below the corresponding services generated by the other three classes of models.

Our conclusion at this point is that both the form of depreciation that is assumed (lightbulb, straight line, geometric, or linear efficiency decline) and the assumptions on interest rates and price expectations (perfect foresight, constant real rate, or anticipated capital gains) matter. This means it will be necessary for statistical agencies to introduce surveys to determine when assets are retired or sold, and it will be necessary for economists to decide what is the best set of assumptions concerning the nominal opportunity cost of capital and anticipated asset price changes.

12.12 The Treatment of Intangible Assets

Since this volume is primarily concerned with the treatment of intangible assets, we devote this section to indicating how the above treatment of tangible assets can be modified to deal with intangible assets. Examples of expenditures on intangible assets are advertising and marketing expenses and research and development expenditures. Both of these categories of expenditures have the character that the present-period outlays will create incremental revenues in the future for the firm that undertakes them. These current-period expenditures on intangible assets have a different character from expenditures on tangible durable inputs, which can be used for a number of periods and then sold to other users.65

The problem in this section is to determine how to allocate the cost outlays on intangible investments over future periods. Thus, the accounting problems in the present section have a different character from that of the previous sections, where a straightforward opportunity cost approach was used. In the present section, the approach taken is one of matching current costs with future expected revenues.66

To fix ideas, suppose that in period $t$, a firm has made expenditures on creating an intangible asset, which are equal to $C^t$:

(72)
$$C^t \equiv \sum_{m=1}^{M} P^t_m Q^t_m,$$

where $P^t_m$ is the period-$t$ price for the $m$th type of input that is used to create the intangible asset and $Q^t_m$ is the corresponding quantity purchased.

65. In many cases, the stream of future revenues created by an intangible investment can be sold on the marketplace (e.g., patents, trademarks, and franchises), but this still does not solve the problem of how to distribute the intangible investment costs over future periods.
66. Paton and Littleton (1940, 123) argued that the primary purpose of accounting is to match costs and revenues. For an excellent early discussion on the importance of matching costs to future revenues, see Church (1917, p. 193).
These expenditures in period t are expected to generate a future stream of incremental revenues for the firm. Let R t0 denote the immediate period-t incremental revenues (which could be zero), and let R tn denote the incremental revenues that the period-t expenditures C t are expected to generate n periods from the present period t, for n 1, 2, . . . . Let r t be the (nominal) period-t opportunity cost of financial capital.67 Then the discounted value of these expected incremental revenues is (73)
$$R^t \equiv R^t_0 + \frac{R^t_1}{1+r^t} + \frac{R^t_2}{(1+r^t)^2} + \frac{R^t_3}{(1+r^t)^3} + \cdots.$$
The problem is to allocate the current-period cost C t over future periods. Thus, let C tn be the allocation of C t to the accounting period that is n periods after period t for n 0, 1, 2, . . . . At first sight, it seems reasonable that these future cost allocations C tn should sum to C t. However, this turns out not to be so reasonable: costs that are postponed to future periods must be escalated by the (nominal) interest rate r t, so that the present value of discounted future costs is equal to the actual period-t costs C t. Thus, the intertemporal cost allocations Ctn should satisfy the following equation: (74)
$$C^t = C^t_0 + \frac{C^t_1}{1+r^t} + \frac{C^t_2}{(1+r^t)^2} + \frac{C^t_3}{(1+r^t)^3} + \cdots.$$
To see why discounting is necessary, consider the following simple example where we invest C t during the present period and anticipate the revenue R t2 two periods from now. The expected discounted profits that this investment will generate are (75)
$$-\,C^t + \frac{R^t_2}{(1+r^t)^2}.$$
The period-by-period cash flows for this project are $-C^t$, 0, $R^t_2$. We want to match the period-$t$ cost $C^t$ with the period-$(t+2)$ revenue flows. Thus, we want to convert the cash flow stream $-C^t$, 0, $R^t_2$ into an equivalent cash flow stream 0, 0, $-C^t_2 + R^t_2$. If we choose

(76)
$$C^t_2 \equiv C^t (1 + r^t)^2,$$
then it can be seen that these two cash flow streams have the same present value and $C^t_2$ is the "right" period-$(t+2)$ cost allocation. Put another way, if we simply carried forward the period-$t$ costs $C^t$ and set $C^t_2$ equal to $C^t$, we would be neglecting the fact that the costs took place in period $t$ while the return on the investment was deferred until period $t+2$; hence, we need to charge the opportunity cost of financial capital for two periods on the initial investment until it is expensed in period $t+2$.

67. Thus, for simplicity, we are making assumption (4) in section 12.2.
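A quick numerical check of this argument, using an illustrative cost, revenue, and interest rate rather than figures from the chapter: the stream $(-C^t, 0, R^t_2)$ and the stream $(0, 0, -C^t_2 + R^t_2)$, with $C^t_2 = C^t(1+r^t)^2$ as in equation (76), have the same present value.

```python
def present_value(cash_flows, r):
    """Discount a list of cash flows (periods 0, 1, 2, ...) at rate r."""
    return sum(cf / (1.0 + r) ** n for n, cf in enumerate(cash_flows))

# Illustrative numbers only.
C, R2, r = 100.0, 130.0, 0.05
C2 = C * (1.0 + r) ** 2                                   # escalated cost, eq. (76)

pv_original    = present_value([-C, 0.0, R2], r)          # (-C, 0, R2)
pv_reallocated = present_value([0.0, 0.0, -C2 + R2], r)   # (0, 0, -C2 + R2)

print(pv_original, pv_reallocated)                        # identical present values
assert abs(pv_original - pv_reallocated) < 1e-9
```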
How should the intertemporal cost allocations $C^t_n$ be chosen? It is natural to make these cost allocations proportional to the corresponding period's anticipated incremental revenues. Thus, choose the number $\lambda$ so that the following equation is satisfied: (77)
$$C^t = \lambda R^t.$$
Thus we set the observed period-$t$ cost associated with the intangible investment $C^t$ equal to the constant $\lambda$ times the discounted value of the anticipated incremental revenue stream $R^t$ that the investment is expected to yield.68 Typically, $\lambda$ will be equal to or less than one, since otherwise the period-$t$ intangible investment expenditures $C^t$ should not be undertaken. If $\lambda$ is less than one, then there will be an expected profit above the opportunity cost of capital, which could be some form of monopoly profit or a reward for risk taking. Once $\lambda$ has been determined by solving equation (77), then the intertemporal cost allocations $C^t_n$ can be defined to be proportional to the corresponding anticipated incremental revenues $R^t_n$ for future periods: (78)
$$C^t_n \equiv \lambda R^t_n, \qquad n = 0, 1, 2, \ldots$$
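As a small illustration of equations (77) and (78), the sketch below solves for $\lambda$ given an assumed profile of anticipated incremental revenues and then forms the nominal cost allocations $C^t_n$. The revenue profile, outlay, and interest rate are hypothetical.

```python
def cost_allocations(C, revenues, r):
    """Match-costs-to-revenues allocations of an intangible outlay.

    C        : period-t outlay on the intangible asset
    revenues : anticipated incremental revenues R_0, R_1, R_2, ...
    r        : nominal opportunity cost of financial capital
    Returns (lam, allocations) with allocations[n] = lam * R_n, eq. (78).
    """
    # Discounted value of the anticipated revenue stream, eq. (73).
    R = sum(Rn / (1.0 + r) ** n for n, Rn in enumerate(revenues))
    lam = C / R                      # eq. (77): C = lam * R
    return lam, [lam * Rn for Rn in revenues]

# Hypothetical example: a 100 outlay expected to raise revenues for four periods.
lam, C_alloc = cost_allocations(C=100.0, revenues=[30.0, 40.0, 30.0, 20.0], r=0.05)
print("lambda =", round(lam, 4))
print("nominal allocations C_n:", [round(c, 2) for c in C_alloc])

# The allocations satisfy eq. (74): their discounted sum equals the outlay.
assert abs(sum(c / 1.05 ** n for n, c in enumerate(C_alloc)) - 100.0) < 1e-9
```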
At this point, it is possible to use the algebra developed in sections 12.2 and 12.3 above with some slight modifications. We can convert the nominal cost allocation factors C tn into constant- (period-t) dollar cost allocations f tn as follows: (79)
$$f^t_n \equiv \frac{C^t_n}{(1+\rho^t)^n} = \frac{\lambda R^t_n}{(1+\rho^t)^n}, \qquad n = 0, 1, 2, \ldots,$$

where $\rho^t$ is the period-$t$ consumer price inflation rate, which is expected to persist into the future.69 The $f^t_n$'s defined by equations (79) are the counterparts to the period-$t$ cross-sectional rental prices that were defined in section 12.2. Once these intertemporal constant-dollar cost allocation factors $f^t_n$ have been defined by equations (79), we can use equations (5) in section 12.2 to define the sequence of constant-dollar asset values,70 $P^t_0, P^t_1, P^t_2, \ldots$, except that the period-$t$ expected rate of asset price change $i^t$ in equations (5) is replaced by the CPI inflation rate $\rho^t$. If we then make use of equation (18), which expresses the nominal interest rate $r^t$ in terms of the

68. Of course, the practical problem that the national income accountant will face is how to estimate the future stream of incremental revenues.
69. This expectational assumption could be relaxed at the cost of more notational complexity.
70. Note that $P^t_0$ is equal to $C^t$.
real rate $r^{*t}$ and the CPI inflation rate $\rho^t$, so that $1 + r^t = (1 + \rho^t)(1 + r^{*t})$, then equations (5) simplify to the following equations:

(80)
$$C^t \equiv P^t_0 = f^t_0 + \frac{f^t_1}{1+r^{*t}} + \frac{f^t_2}{(1+r^{*t})^2} + \frac{f^t_3}{(1+r^{*t})^3} + \cdots$$
$$P^t_1 = f^t_1 + \frac{f^t_2}{1+r^{*t}} + \frac{f^t_3}{(1+r^{*t})^2} + \cdots$$
$$P^t_2 = f^t_2 + \frac{f^t_3}{1+r^{*t}} + \cdots$$
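Continuing the hypothetical example from above, the following sketch converts the nominal allocations into constant-dollar allocation factors $f^t_n$ as in equation (79) and cumulates them into the sequence of constant-dollar "asset" values $P^t_n$ as in equations (80). The CPI inflation rate is an assumed 2 percent, and the real rate is the one implied by the 5 percent nominal rate used in the previous sketch.

```python
def intangible_asset_values(C_alloc, rho, r_star):
    """Constant-dollar allocation factors (eq. 79) and 'asset' values (eq. 80).

    C_alloc : nominal cost allocations C_n from eq. (78)
    rho     : expected CPI inflation rate
    r_star  : real interest rate
    """
    f = [Cn / (1.0 + rho) ** n for n, Cn in enumerate(C_alloc)]
    # P_n = f_n + f_{n+1}/(1 + r*) + f_{n+2}/(1 + r*)^2 + ...
    P = [sum(f[m] / (1.0 + r_star) ** (m - n) for m in range(n, len(f)))
         for n in range(len(f))]
    return f, P

# Rounded allocations carried over from the previous sketch (lambda * R_n).
C_alloc = [26.65, 35.53, 26.65, 17.76]
rho = 0.02                            # assumed CPI inflation rate
r_star = 1.05 / 1.02 - 1.0            # real rate implied by the 5% nominal rate

f, P = intangible_asset_values(C_alloc, rho, r_star)
print("constant-dollar factors f_n:", [round(x, 2) for x in f])
print("constant-dollar asset values P_n:", [round(x, 2) for x in P])

# P_0 reproduces the original 100 outlay (up to the rounding of C_alloc),
# and the later asset values decline as the allocations are charged off.
assert abs(P[0] - 100.0) < 0.1
```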
The sequence of constant-dollar "asset" values $C^t = P^t_0, P^t_1, P^t_2, \ldots$ shows how the period-$t$ intangible investment can be written down over time in constant period-$t$ dollars, and equations (10) and (12) in section 12.3 show how a sequence of constant-dollar depreciation rates $\delta^t_n$ for the intangible investment can be obtained from the sequence of constant-dollar "asset" values, $P^t_n$.71 These depreciation rates $\delta^t_n$ can also be applied to the investment components $Q^t_m$ to form estimated constant-dollar input stocks for the intangible investments.72 Thus, the assumptions made about the shape of the anticipated future period incremental revenues generated by the intangible investment,73 along with the matching of costs to revenues methodology, determine the pattern of depreciation that can be used to write down these costs associated with the intangible investment over time.74

The period-$t$ beginning-of-period and end-of-period user cost charges, $f^t_0$ and $u^t_0$ respectively, for the intangible investment have the following forms:

(81)
$$f^t_0 = P^t_0 - \frac{1+\rho^t}{1+r^t}\,P^t_1 = P^t_0 - \frac{P^t_1}{1+r^{*t}} = \frac{r^{*t} P^t_0 + D^t_0}{1+r^{*t}},$$

71. If the assumptions on the anticipated (real) incremental revenues are such that the $f^t_n$ decline at the geometric depreciation rate $\delta$, then this rate will carry over to $P^t_n$; that is, we will have $P^t_n = (1-\delta)^n C^t$ for $n = 0, 1, 2, \ldots$ if $f^t_n = (1-\delta)^n f^t_0$ for $n = 1, 2, \ldots$
72. It is not necessary for the statistical agency to do this, but some users will be interested in the resulting M asset stocks that form capital stock aggregates of the $Q^t_m$. Normal index number theory can be used to aggregate these M stock components into an overall capital stock aggregate using the period-$t$ flow prices $P^t_m$ as price weights.
73. Thus the specific depreciation models presented in sections 12.7–12.10 can be adapted to the present context.
74. It should be noted that the obsolescence problems discussed in section 12.5 do not occur in the present context because the asset inflation rate and the CPI inflation rate coincide. However, obsolescence problems can still occur when technical progress causes expectations about future incremental revenues to be revised downward.
(82)
$$u^t_0 = P^t_0 (1 + r^t) - (1 + \rho^t) P^t_1 = (1 + \rho^t)\left[r^{*t} P^t_0 + D^t_0\right].$$

These two formulas show that the period-$t$ "user costs" for the intangible investment do not consist solely of a depreciation charge, $D^t_0$: there are also real interest rate charges that must be added to the depreciation term. It should be noted that the cost allocation model outlined above can be applied to other forms of "assets"; namely, deferred charges, prepaid expenses,75 and transfer fees when a reproducible asset is acquired. The one-hoss-shay form of revenue matching is probably the preferred method for dealing with this type of "asset."

12.13 Conclusion

We have considered the problems involved in constructing price and quantity measures for both the capital stock and the flow of services yielded by the stock in an inflationary environment. In order to accomplish these tasks, the statistician will have to make decisions in a number of dimensions:
• What length of life L best describes the asset?
• What form of depreciation or asset efficiency is appropriate?
• What assumptions should be made about the reference interest rate and the treatment of anticipated asset price change?

In this paper, we focused on the last two questions. We considered four classes of depreciation or efficiency and three types of assumption on the nominal interest rate $r^t$ and on the anticipated asset rate of price change, $i^t$, giving twelve models in all. We evaluated these twelve models using aggregate Canadian data on two asset classes over the period 1926–99. We found that the assumptions on the form of depreciation or asset efficiency by age were less important than the assumptions made about the reference interest rate and the treatment of anticipated asset price changes.76

We consider the third question first. In order to answer this question, it is necessary to ask about the purpose for which the capital data will be used. For some purposes, it may be useful to use ex post asset price changes

75. Hatfield (1927, p. 16) gives several examples of this type of asset, including insurance payments that apply to multiple accounting periods, the stripping away of surface rock for a strip mine, and prepaid expenses. Hatfield (p. 18) notes that this type of asset is different from the usual sort of tangible asset since this type of asset cannot readily be converted into cash; i.e., it has no opportunity cost value.
76. Harper, Berndt and Wood (1989) also found that differing assumptions on $r^t$ and $i^t$ made a big difference empirically using U.S. data. However, they considered only geometric depreciation. Our paper can be viewed as an extension of their work to consider also variations in the form of depreciation.
as anticipated price changes. For example, this approach may be useful in constructing estimates of taxable business income if capital gains are taxable. It may also be useful if we want to evaluate the ex post efficiency of a firm, industry, or economy. However, for most other uses, assuming that anticipated price changes are equal to actual ex post price changes is very unsatisfactory since it is unlikely that producers could anticipate all of the random noise that seems to be inherent in series of actual ex post asset price changes. Moreover, this approach generates tremendous volatility in user costs, and statistical agencies would face credibility questions if this approach were used. Thus, we restrict our attention to the choice between assuming a constant real interest rate or using smoothed ex post asset price changes as estimates of anticipated asset price changes. The assumption of constant real interest rates has a number of advantages:
• The resulting price and quantity series tend to be very smooth.
• The estimates are reproducible; that is, any statistician given the same basic price and quantity data along with an assumed real interest rate will be able to come up with the same aggregate price and quantity measures.

However, the use of smoothed ex post asset price changes as measures of anticipated asset price changes has some advantages as well:
• Longer-run trends in relative asset prices can be accommodated.
• The anticipated obsolescence phenomenon can be captured.

Each individual statistical agency will have to weigh the costs and benefits of the two approaches in order to decide which approach to use. I think that for most assets, it would be quite acceptable to use the constant real interest rate model, and this would maximize reproducibility. However, with assets that have experienced rapid technical progress, I would prefer to use the smoothed expectations model, since this model will better capture obsolescence effects. I would also use the smoothed expectations model for land, since over long periods land prices tend to appreciate faster than the general price level.
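The practical difference between the two approaches can be sketched with the generic end-of-period user cost $u^t_0 = (1 + r^t)P^t_0 - (1 + i^t)P^t_1$. In the code below, series (a) uses a constant 4 percent real rate with CPI inflation as the anticipated asset price change, while series (b) uses a nominal rate together with a smoothed series of ex post asset price changes. The price series, rates, and the centred-moving-average smoother are all illustrative assumptions, not the series or the smoothing method used in Diewert (2004).

```python
import numpy as np

def user_cost(P0, P1, r, i):
    """Generic end-of-period user cost: u = (1 + r) * P0 - (1 + i) * P1."""
    return (1.0 + r) * P0 - (1.0 + i) * P1

def smooth(x, window=3):
    """Centred moving average, a simple stand-in for smoothed expectations."""
    return np.convolve(np.asarray(x, float), np.ones(window) / window, mode="same")

# Illustrative inputs for one asset (not the Canadian series used in the text).
P0 = np.array([100.0, 103.0, 106.1, 109.3, 112.6])      # new asset prices
P1 = 0.85 * P0                                          # one-year-old asset prices
ex_post_i = np.array([0.08, -0.02, 0.07, 0.01, 0.05])   # volatile ex post changes
nominal_r = np.array([0.07, 0.06, 0.08, 0.07, 0.06])
cpi = np.array([0.02, 0.02, 0.03, 0.02, 0.02])

# (a) Constant real rate of 4 percent: i = CPI inflation, 1 + r = (1 + cpi) * 1.04.
u_const = user_cost(P0, P1, (1 + cpi) * 1.04 - 1, cpi)

# (b) Smoothed expectations: anticipated asset inflation = smoothed ex post rates.
u_smooth = user_cost(P0, P1, nominal_r, smooth(ex_post_i))

print(np.round(u_const, 2))    # smooth, reproducible series
print(np.round(u_smooth, 2))   # tracks trends in the ex post asset price changes
```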
The straight-line depreciation model, while not as inconsistent with the data as the one-hoss-shay model, also does not generate the pattern of accelerated depreciation that seems to characterize many used asset markets. However, given the simplicity of this model (to explain to the public), it could be used by statistical agencies. The geometric depreciation model seems to be most consistent with the empirical studies on used assets of the four simple classes of model that we considered.78 Of course, geometric depreciation has the disadvantage that it will never exhaust the full value of the asset.79 Finally, a good alternative to the geometric depreciation model is the linear efficiency decline model. However, this model may have a pattern of “overaccelerated” depreciation relative to the geometric model. What is required is more empirical work so that the actual pattern of depreciation can be determined. In particular, statistical agencies need to consider establishing capital asset surveys, which would ask firms not only what assets they purchased during the reference period but also what assets they sold or scrapped during the reference period.80 We conclude by noting some limitations of the analysis presented in this paper:
• We have not dealt in great detail with the problems posed by unique assets, although the model presented in the previous section could be used.
• We have not dealt with the problems posed by assets that depreciate by use rather than by age.81
• We have neglected property taxes, income taxes, and insurance premiums as additional components of user costs.
• We have neglected the problems posed by indirect commodity taxes on investment goods; this complication can lead to differences between investment prices and asset stock prices.
• We have neglected many forms of capital in our empirical work including inventories, land, knowledge capital (except for our brief discussion in the previous section), resource stocks, and infrastructure capital.
• We have not discussed the many complexities involved in making quality adjustments for new types of capital.
• We have not discussed the problems posed by establishment deaths on
78. See Hulten and Wykoff (1981a,b) and Jorgenson (1996a). 79. Some statistical agencies solve this problem by “scrapping” the depreciated value of the asset when it reaches a certain age. This solves one problem but it introduces two additional problems: (a) the truncation age has to be decided upon and (b) the theoretical simplicity of the model is lost. 80. The survey should also ask for information on what the age and initial purchase prices of the sold or scrapped assets was. 81. Our reason for neglecting use is simple: usually, the national statistician will not have data on the use of machines available.
asset lives and depreciation rates. We would expect asset lives to decrease during recessions, but we have not spelled out exactly how to adjust for this factor. However, we have provided a fairly comprehensive review of most of the issues surrounding the measurement of capital, including a method for forming intangible capital stocks.
References Babbage, C. 1835. On the economy of machinery and manufactures. 4th ed. London: Charles Knight. Beidelman, C. 1973. Valuation of used capital assets. Sarasota, FL: American Accounting Association. ———. 1976. Economic depreciation in a capital goods industry. National Tax Journal 29:379–90. Böhm-Bawerk, E. V. 1891. The positive theory of capital. Trans. W. Smart. New York: G. E. Stechert. (Orig. pub. 1888.) Canning, J. B. 1929. The economics of accountancy. New York: Ronald Press. Christensen, L. R., and D. W. Jorgenson. 1969. The measurement of U.S. real capital input, 1929–1967. Review of Income and Wealth 15:293–320. ———. 1973. Measuring the performance of the private sector of the U.S. economy, 1929–1969. In Measuring economic and social performance, ed. M. Moss, 233–351. New York: Columbia University Press. Church, A. H. 1901. The proper distribution of establishment charges, parts I, II, and III. The Engineering Magazine 21:508–17, 725–34, 904–12. ———. 1917. Manufacturing costs and accounts. New York: McGraw-Hill. Daniels, M. B. 1933. The valuation of fixed assets. The Accounting Review 8:302–16. Diewert, W. E. 1974. Intertemporal consumer theory and the demand for durables. Econometrica 42:497–516. ———. 1976. Exact and superlative index numbers. Journal of Econometrics 4: 115–45. ———. 1978. Superlative index numbers and consistency in aggregation. Econometrica 46:883–900. ———. 1980. Aggregation problems in the measurement of capital. In The measurement of capital, ed. D. Usher, 433–528. Chicago: University of Chicago Press. ———. 1992a. Fisher ideal output, input, and productivity indexes revisited. Journal of Productivity Analysis 3:211–48. ———. 1992b. The measurement of productivity. Bulletin of Economic Research 44:165–98. ———. 2004. Measuring capital. Discussion Paper 04-10. University of British Columbia, Vancouver, Department of Economics. Diewert, W. E., and K. J. Fox. 1999. Can measurement error explain the productivity paradox? Canadian Journal of Economics 32:251–80. Diewert, W. E., and D. A. Lawrence. 2000. Progress in measuring the price and quantity of capital. In Econometrics and the cost of capital: Essays in honor of Dale W. Jorgenson, ed. L. J. Lau, 273–326. Cambridge: MIT Press.
Edwards, E. O., and P. W. Bell. 1961. The theory and measurement of business income. Berkeley: University of California Press. Epstein, L. G. 1977. Essays on the economics of uncertainty. PhD diss., University of British Columbia. Fisher, I. 1896. Appreciation and interest. New York: Macmillan. ———. 1908. Are savings income? Publications of the American Economic Association, Third Series, 9:21–47. ———. 1922. The making of index numbers. London: Macmillan. Fox, Kevin J., and Ulrich Kohli. 1998. GDP growth, terms-of-trade effects, and total factor productivity. Journal of International Trade and Economic Development 7:87–110. Garcke, E., and J. M. Fells. 1893. Factory accounts: Their principles and practice. 4th ed. London: Crosby, Lockwood, and Son. (Orig. pub. 1887.) Griliches, Z. 1963. Capital stock in investment functions: Some problems of concept and measurement. In Measurement in economics, ed. C. Christ, 115–37. Stanford, CA: Stanford University Press. Hall, R. E. 1971. The measurement of quality change from vintage price data. In Price indexes and quality change, ed. Z. Griliches, 240–71. Cambridge, MA: Harvard University Press. Harper, M. J., E. R. Berndt, and D. O. Wood. 1989. Rates of return and capital aggregation using alternative rental prices. In Technology and capital formation, ed. D. W. Jorgenson and R. Landau, 331–72. Cambridge: MIT Press. Hatfield, H. R. 1927. Accounting: Its principles and problems. Lawrence, KS: Scholars Book Co. Hicks, J. R. 1939. Value and capital. Oxford: Clarendon Press. Hill, P. 1999. Capital stocks, capital services, and depreciation. Paper presented at the third meeting of the Canberra Group on Capital Stock Statistics. 8–10 November, Washington, DC. ———. 2000. Economic depreciation and the SNA. Paper presented at the 26th Conference of the International Association for Research on Income and Wealth. 27 August–2 September, Cracow, Poland. Hill, R. J., and T. P. Hill. 2003. Expectations, capital gains, and income. Economic Inquiry 41:607–19. Hotelling, H. 1925. A general mathematical theory of depreciation. Journal of the American Statistical Association 20:340–53. Hulten, C. R. 1990. The measurement of capital. In Fifty years of economic measurement, ed. E. R. Berndt and J. E. Triplett, 119–52. Chicago: University of Chicago Press. ———. 1996. Capital and wealth in the revised SNA. In The new system of national accounts, ed. J. W. Kendrick, 149–81. New York: Kluwer Academic. Hulten, C. R., and F. C. Wykoff. 1981a. The estimation of economic depreciation using vintage asset prices. Journal of Econometrics 15:367–96. ———. 1981b. The measurement of economic depreciation. In Depreciation, inflation, and the taxation of income from capital, ed. C. R. Hulten, 81–125. Washington, DC: Urban Institute Press. ———. 1996. Issues in the measurement of economic depreciation: Introductory remarks. Economic Inquiry 34:10–23. Jorgenson, D. W. 1989. Capital as a factor of production. In Technology and capital formation, ed. D. W. Jorgenson and R. Landau, 1–35. Cambridge: MIT Press. ———. 1996a. Empirical studies of depreciation. Economic Inquiry 34:24–42. ———. 1996b. Tax policy and the cost of capital. Vol. 2 of Investment. Cambridge: MIT Press.
Jorgenson, D. W., and Z. Griliches. 1967. The explanation of productivity change. Review of Economic Studies 34:249–83. ———. 1972. Issues in growth accounting: A reply to Edward F. Denison. Survey of Current Business 52 (May): 65–94. Kohli, Ulrich. 1982. Production theory, technological change, and the demand for imports: Switzerland, 1948–1974. European Economic Review 18:369–86. Maddison, A. 1993. Standardized estimates of fixed capital stock: A six country comparison. Innovazione e materie prime (April): 1–29. Matheson, E. 1910. Depreciation of factories, mines, and industrial undertakings and their valuations. 4th ed. London: Spon. (Orig. pub. 1884.) Organization for Economic Cooperation and Development (OECD). 1993. Methods used by OECD countries to measure stocks of fixed capital. National Accounts Sources and Methods no. 2. Paris: OECD. Oulton, N. 1995. Depreciation, obsolescence, and the role of capital in growth accounting. Bulletin of Economic Research 47:21–33. Paton, W. A., and A. C. Littleton. 1940. An introduction to corporate accounting standards. Chicago: American Accounting Association. Pigou, A. C. 1924. The economics of welfare. 2nd ed. London: Macmillan. ———. 1935. Net income and capital depletion. The Economic Journal 45:235–41. Scott, M. 1995. Depreciation, obsolescence, and the role of capital in growth accounting: A reply to Oulton. Bulletin of Economic Research 47:35–38. Schmalenbach, E. 1959. Dynamic accounting. Trans. G. W. Murphy and K. S. Most. London: Gee and Company. (Orig. pub. 1919.) Schreyer, P. 2001. OECD productivity manual: A guide to the measurement of industry-level and aggregate productivity growth. Paris: OECD. Solomons, D. 1968. The historical development of costing. In Studies in cost analysis, 3–49. Homewood, IL: Richard D. Irwin. Triplett, J.E. 1996. Depreciation in production analysis and in income and wealth accounts: Resolution of an old debate. Economic Inquiry 34:93–115. Walras, L. 1954. Elements of pure economics. Trans. W. Jaffé. Homewood, IL: Richard D. Irwin. (Orig. pub. 1874.)
Comment
Frank C. Wykoff
Introduction Capital measurement continues to confound us. Official national income accountants from around the world are in 2004 trying to develop a consistent and coherent set of procedures for measuring capital accounts. This paper by Diewert is a major marker in progress toward this end. This was no doubt in Diewert’s thinking as he wrote the paper,1 and it should be read with this in mind, because portions are responses to specific analysts involved in the international accounting effort. Frank C. Wykoff is the Eldon Smith Professor of Economics at Pomona College. 1. Even though this paper is from a 2002 National Bureau of Economic Research Conference on Research in Income and Wealth conference, revisions were ongoing in 2004.
Roadmap to the Diewert Paper Diewert constructs specific steps that statisticians and national income accountants can follow in generating capital accounts. His model is based on capital theory accounting and on index number theory. He does not impose any one set of values, asset lives, depreciation patterns, discount rate, or expected inflation on accountants. Instead, Diewert illustrates, with Canadian data, the consequences of twelve different sets of parameter and form assumptions for each capital aggregate. Diewert assesses the relative importance of different assumptions by comparing his results. This will assist national income accountants to allocate scarce research resources. The paper has two parts. The first part develops an internally consistent model for measuring aggregate capital stock quantity and price indexes and indexes for flow measures of capital services (quantities) and user costs (shadow costs or service prices) from historical data on investment flows. Diewert provides a specific solution for the problem that accountants must solve in order to produce capital accounts, because data on capital aggregates are rarely available; only historical data on investment flows are. Diewert assumes that investment deflators are available so that investment expenditures can be decomposed into price and quantity components, and that these deflators and quantities are for “constant quality assets.” The assumption of a preexisting quality change correction is a strong one. The Diewert model seems to derive from the Hall and Jorgenson (1967), Hall (1968, 1971), Jorgenson (1974), and Hulten (1990) neoclassical model of capital.2 The Diewert model is different in some important respects, though. I will comment on the differences in some detail. Diewert provides a recipe for constructing capital aggregates. This contribution alone means that every national income accountant in the world should have this paper at their fingertips as they design their accounts. While the final capital accounting model that world national income accountants adopt may differ from the Diewert model, it should be close enough to make this paper a guidepost. Diewert explains the algebraic relationships between asset prices, user costs, real and nominal rates of return, depreciation, revaluation, and inflation required in a coherent economic model of capital. He shows exactly how to use these relationships to construct capital aggregates from investment flow data. He also uses the model to critique several alternative approaches to capital accounting. Second, Diewert applies his model to Canadian investment expenditures on two asset classes, machinery and equipment, and nonresidential 2. In this paper I am giving my interpretation of the HJH model. Hall, Jorgenson, and Hulten may have different understandings than mine, so any errors in presenting the model are mine, not theirs. I may be doing a disservice to Diewert and to Hall, Jorgenson, and Hulten pitting my view of HJH against Diewert. Still, I find this a useful deus ex machina.
Fig. 12C.1  Diewert's twelve sets of assumptions (a 4 × 3 array: the four efficiency/depreciation patterns by the three interest rate and expected price change assumptions described in the text)
structures, from 1926 to 1999. With these data he is able to construct capital aggregates for 1965 to 1999. He assesses the consequences for each capital aggregate of the twelve different sets of assumptions. The twelve assumptions are illustrated in the 4 3 array, figure 12C.1. Each row is one of four depreciation patterns—the one-hoss-shay efficiency function, linear depreciation pattern, geometric depreciation pattern, and linear-declining efficiency function.3 Each of the three columns is one of three expected asset-specific price changes and rates of return— perfect foresight, constant real rate of 4 percent, and an expected assetspecific price change equal to a smoothed ex post rate of asset-specific price change.4 The result is twelve pairs of different assumptions about two unknowns for which he produces each of the four capital aggregates. He then compares the different aggregate values and the stability over time of the aggregates. While both depreciation patterns and interest/expected inflation rates matter, differences associated with the latter are bigger. Finally, Diewert provides his own preferred pair of assumptions. The Hall-Jorgenson-Hulten Model To understand the Diewert model it helps to show how it differs from the Hall-Jorgenson-Hulten (HJH) model. Start with an age-time asset-price tableau in figure 12C.2. Each cell contains the acquisition price of an asset.5 The age of the asset is indexed from top to bottom starting with a new asset age 0, down to ages 1, 2, 3, ... . The year in which the asset is priced is indexed along the top left to right from year 1998 to 1999, 2000, 2001, ... . If we track the prices down a diagonal, we track the history or life experience of a particular asset. Down the principal diagonal we trace the life experience of a new, 1998 asset, q(0, 1998), q(1, 1999), q(2, 2000), ... . 3. The difference between depreciation patterns and efficiency function patterns is explained by Diewert and by Hulten (1990). See also Wykoff (2002). 4. Diewert assesses several other assumptions but ends up comparing these twelve. 5. While I use the term “asset” here, when applied to actual data, each cell of the tableau will contain the average price of a cohort (given age and time) of assets. The capital theory model is virtually never applied to data on a single asset. It is ordinarily used to decompose expenditures on a cohort of heterogeneous assets. This has important implications for assessing different depreciation patterns for empirical work.
Fig. 12C.2  Age-year price tableau

Age ↓ \ Year →   1998         1999         2000         2001         2002
0                q(0,1998)    q(0,1999)    q(0,2000)    q(0,2001)    q(0,2002)
1                q(1,1998)    q(1,1999)    q(1,2000)    q(1,2001)
2                q(2,1998)    q(2,1999)    q(2,2000)    q(2,2001)
3                q(3,1998)                              q(3,2001)
4                q(4,1998)                                           q(4,2002)
While the phrase “acquisition price” allows us to abstract in principle from whether the asset is actually marketed or not, the price is the equivalent of an explicit purchase price in a competitive market that one would pay to acquire one unit of the asset. To allow for the fact that most assets are not marketed once they are put into productive use, we may refer to this as the “shadow” or “acquisition” price. In any case, the four-row 2002column cell contains the (shadow) market price of “purchasing” one unit of a four-year-old asset in 2002. The price tableau suggests one can disentangle forces operating on markets that cause changes in an asset’s price as it moves down a diagonal over the course of its life. In a world without any changes in technology, perhaps Egypt for 3,000 years, this may be a relatively easy task. Hall (1968, 1971), however, wanted to analyze the implications of technological change on asset prices. Clearly, today technological change has been occurring in a lot of asset classes, and these changes influence the prices in figure 12C.2, so learning about technological change and capital is clearly an important goal. A major insight of Hall’s (1968) analysis was to recognize an identification impossibility. Let s index age and let t index time, so that q(s, t) is the price of an age-s asset in year t. Hall defines vintage to be the difference between time and age: (1)
$$v \equiv t - s.$$
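Equation (1) makes age, time, and vintage linearly dependent, which is the source of the identification problem discussed in the next paragraph. A minimal numerical check of that collinearity (synthetic tableau cells, numpy only): a design matrix containing full sets of age, year, and vintage dummies is rank deficient, so the three sets of effects cannot all be identified from a price tableau.

```python
import numpy as np

# Build all (age s, year t) cells of a small price tableau and their vintages.
ages = range(5)                 # s = 0, ..., 4
years = range(1998, 2003)
rows = [(s, t, t - s) for s in ages for t in years]   # vintage v = t - s, eq. (1)

def dummies(values, categories):
    """One-hot encode a list of values given the full category list."""
    return np.array([[1.0 if v == c else 0.0 for c in categories] for v in values])

s_vals, t_vals, v_vals = zip(*rows)
X = np.hstack([
    dummies(s_vals, sorted(set(s_vals))),   # age effects
    dummies(t_vals, sorted(set(t_vals))),   # time effects
    dummies(v_vals, sorted(set(v_vals))),   # vintage effects
])

print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))
# The rank falls short of the column count, so age, time, and vintage
# effects cannot all be recovered from the tableau alone.
```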
Three types of effects influence the price of an asset throughout its history—vintage effects, age effects, and time effects. Equation (1) means that once we specify two of the three effects, the third is well defined by equation (1). In other words, 2 degrees of freedom do not allow identification of three unknowns. Hall’s Impossibility Theorem is that an analyst can only identify two of the three terms from a price tableau. This theorem implies an important econometric constraint. By definition, assume all forces influencing an asset’s price throughout its life fall into one of the three categories: age, vintage, and time. Any effect on price is either age specific, vintage specific, or time specific. Because
these are subtle distinctions often muddled in the literature, we explore each effect below. Age Effects We want to compare older to newer assets while holding vintage and time fixed. Assume away all vintage effects and compare assets of different ages at the same time. For example, suppose the asset is a 1-liter Egyptian water pot whose design never changes. In any time period, pots of different ages coexist.6 Some are new, some are a year old, and some are 100 years old. Would all pots have the same price? If not, what, in a competitive market, would cause their prices to differ? Unless these pots are perpetuities, like the biblical widow’s cruse, they are eventually going to stop holding liquids. Their lives are finite. An old pot, having delivered services for a long time, is closer to the end of life, so a market will price it below a newer pot.7 The old one simply retains less productive capacity. Triplett (1996), Griliches (cited in Triplett 1996), and Feldstein and Rothschild (1974) call this effect “exhaustion.” It is also possible that the in-use efficiency of the asset, even a simple pot, declines with age and use during its life. It may slowly wear out and thin out, so that water in a ten-year-old pot may evaporate more quickly, or the pot may spring a hairline fracture that leaks a bit, or it may become discolored and less attractive. Decline of in-use productive efficiency, which means a decline in marginal product, is called “decay.” Note that decay and exhaustion are physical concepts. Obviously a lot of assets decay—old cars have lost some of their get-up-and-go, old computers become sluggish. When old, a machine may break down and need repairs—a pot might be patched up. This too is decay—the probability of breakdown rises, the likelihood of downtime increases, more frequent repairs are costly, and replacement parts become scarcer. Because of exhaustion and physical decay, old machines have lower prices than new machines. This price effect is called “deterioration.” Vintage Effects Vintage effects cannot be illustrated with identical Egyptian pots. Consider instead French wines. Each specific year that a new wine is extracted from French grapes is a “vintage.” Vintage does not just refer to year t but to a specific date—2001. People do not pay vastly different prices for fiveversus six-year-old wines, but they do pay very different prices for superb 6. I use discrete time in this analysis. Hall (1968) worked in continuous time. Discrete time avoids the calculus but can result in (relatively uninteresting) complexities like beginning or end of the period and lumpy versus marginal changes. 7. This effect obviously is related to intensity of use. Rental agency cars are used up a lot faster than the car driven by the little old lady in Pasadena. The model can easily allow for this. In fact, age is really only a proxy here for intensity of use. Applying this distinction to data is harder.
versus poor vintages. It is not that the wine is sold in a particular year. It is the specific vintage when the grapes ripened on the vines that matters to wine connoisseurs.8 The key is that each vintage is inherently unique— quality is embodied in the vintage wine. Furthermore, the quality of a superior vintage cannot be added on, embedded in, or retrofitted to a poorer vintage. Like wine, different vintages of machine can be different, and these differences may be embodied in the new machine and not available to the old. I define vintage effects so that they can only occur if technological change is embodied in new capital. There are several tricky points in transferring the vintage concept from wines to machines. First, a vintage effect is different from a passage-of-time effect. A vintage effect is a difference between two assets; a time effect is a change in assets between two periods. While for wines vintage differences occur coincidentally with changes in time, this is not true for machines. In any given year or date, different kinds of a particular machine can be produced at the same time. This is not possible with wines, because the vintners define all wines produced in year v to be vintage-v wine. With capital, however, some new machines may embody new technologies and some may not. Automobile producers, for instance, gradually adopt new technologies: automatic shift, power steering, air conditioning, safety air bags, radial tires, and navigation systems are all technological innovations that were at first embodied in a few top-of-the-line models of new cars. Eventually, successful innovations become standard equipment embodied in almost all cars. Consequently, while this is not true of wines, machines are often produced in any given year with a variety of the new technologies embodied in them— these are different “models.” For this reason, the very concept of “vintage” capital becomes ambiguous compared to vintage wine. Most stocks of new assets are heterogeneous, creating conceptual and practical measurement difficulties, but heterogeneity also offers research opportunities. Heterogeneity complicates the model in which a well-defined asset is being priced in the age-time tableau. How much and what types of heterogeneity do we allow in such a tableau? At first one might restrict the analysis to a homogeneous good, like the Egyptian pot or a single 75-watt light bulb, but this rules out technological change and vintage effects de facto. In theory we can imagine investment flows reflecting acquisition of only new vintages each year, but in applying the model to data we will have to contend with the reality that new assets in each year are heterogeneous.9 Do we apply the HJH model only to one simple 60-watt light bulb? Do 8. Exactly why this is true is controversial. Wine judges talk about mystical forces, like a vintner’s wife having a new baby or lightning striking some oak trees, but different quality vintages more likely reflect weather conditions, especially timely rainfall and sunlight. See Ashenfelter, Ashman, and LaLonde (1995). 9. Heterogeneity of new production raises another difficult issue—heterogeneity of consumers and sorting could create serious identification problems with hedonic methods. This problem challenges the representative consumer model.
we apply the model to only four-door sedan Chevrolets with standard features that never change? Do we apply the model to all laptop computers? When a new innovation comes on the scene, do we treat it as a new vintage even if many new assets do not embody the new innovation? More research is needed to answer these questions, and some problems only become evident when one applies the model to data. I would make several points, though. First, unless we allow for heterogeneity of new assets when technological change occurs, we cannot, as Hall has shown, ever hope to identify vintage, age, and time effects. For example, price hedonic techniques, a widely used procedure in this literature, can only help us dig deeper into identification if new assets at a point in time are heterogeneous.10 Second, we must have heterogeneity in a price tableau if we want to estimate the consequences of embodied technological change. Third, once we allow for differences in technology embedded in different assets, we encounter the problem that our tableau is somehow defined for an arbitrarily defined set of assets. A closely related problem here is the definition of one unit of the asset. Consider the simplest example of a quality change—what does it mean to say we are pricing one box of crackers if a new box contains twice the crackers of an old box? How do we define one computer if a new one has twice the capacity along every dimension? One way out of this conundrum would be to assume that assets, even with different technologies, are the same good if they are producing the same output, as long as we adjust for changes in productive capacity. We can identify asset quantities in equivalency units based on the output measure of each machine. A box with twice the crackers is equivalent to two of the smaller boxes. Even here we encounter problems, because some changes embodied in new goods permit qualitatively different kinds of output. For example, a four-wheel-drive SUV is still an SUV, but it can drive in off-road places where a two-wheel-drive SUV cannot. Furthermore, an SUV is still a car, even if it differs in some dimensions from other cars. A cell phone embodied with a computer or digital camera is still a cell phone, but it also allows one to take and view pictures. In principle the HJH model provides a coherent method for specifying units in terms of “vintageequivalents.” Still, the model does not clearly draw the line between old 10. Hedonics encounters another problem from heterogeneity and identification of new technologies or “better” vintages. Faster processors in computers ordinarily are better processors, and faster processors are developed over time. For complex assets, though, some features are traded off in design. For example, heavier cars are safer cars, implying that weight increases improve quality. Lighter cars are more fuel efficient, lowering operating costs. This implies that weight reduces quality. This creates ambiguity in assessing the contribution of weight to “vintage” changes among periods. The example of changing prices of gasoline influencing the vintage effects raises complicated questions about capital aggregation. Many characteristics may be traded off, so that a change over time may be an embodied technological improvement in one period and yet be a lower quality effect in another.
goods (cell phones and cameras) and an entirely new good (a cell phone with camera) or quality change of the old good (a cell phone with camera). The idea of vintage machines here is that if two assets that are the same age in the same time period can embody different technologies, then they are different vintages. We could think of new, 2004 computers, with different RAM, different processors, different pixels per square inch, and so on. If we insist on asserting that all of these are just different assets instead of different vintages of the same asset, then we cannot proceed to analyze vintage effects for a class of assets. Vintages of processors are well defined—a Pentium I, II, III, and IV, for example. Vintages of RAM may be harder to specify, but the idea of “more” is still there. Acknowledging practical ambiguities and problems, the key to vintage effects in this model is that new technology is embodied in a new vintage that remains unavailable to older vintages. For convenience I am going to assume that new vintages are superior to old. (This is obviously wrong for wines and may be wrong for some assets as well. For example, suppose critical components or resources, like black walnut, become unavailable. New furniture may not be as good as old.)11 What are the consequences of technological innovations in superior vintages for other new assets that do not have the superior features? The older vintages, while not aged by definition and with unchanged marginal products, will in a competitive market suffer a price decline. In the HJH model this price decline is “obsolescence.” This definition of obsolescence depends only on vintage differences and does not result, strictly speaking, from the passage of time. I will explain that Diewert approaches many of these problems differently from HJH. Time Effects Now consider time effects. Time effects are all effects that occur between periods that do not reflect aging and are not specific to vintage. Following HJH, time effects result in “capital gains (losses).” One might ask, why isolate vintage from time effects? After all, as a general rule, new vintages of capital are adopted over time. Isn’t isolation of vintage effects arbitrary? The answer has to do with the purpose for these models in the first place. An important issue in economics concerns specification of the capital input in production and the effects of technological change on the production function. For example, Solow (1970) modeled the vintage production function to show how growth models could account for embodied technological change. Isolation of vintage effects in this model and identifying “obsolescence” as resulting from vintage effects allows us to study the consequences of embodied technological change. In any case, time change generates many effects besides aging and vin11. See also caveats in notes 8 and 9.
tage. Excluding age and vintage effects, it seems immediately clear that there are two very distinct types of price changes that occur between periods. First is a change in the value of the numeraire (the unit of account—the dollar, yen, or euro). This effect is “inflation.” By definition, inflation is a change in the unit of account that leaves relative prices unchanged. Most economists would acknowledge that the classical dichotomy—changes in output and other real variables are independent of the price level and other nominal variables—applies in the long run. Thus, defining velocity, V, as PQ/M, where P is the price level, Q is real output, and M is the quantity of money, most economists would agree that in the long run
(2)   dM = γ + π + dQ,
where dM is the time rate of change in the nominal money supply, γ is a constant (reflecting secular trend changes in velocity), π is the rate of inflation, and dQ is the change in output. Holding Q constant, then, equation (2) implies
(3)   dm ≡ dM − π = γ.
The change in the real money supply equals the rate of change in velocity. If velocity is a constant, then in the static long run, no change in output, inflation equals the growth rate of the money supply so that the real money supply is constant. This model implies that the sole cause of inflation is growth in the nominal money supply. Even if this is not the case, inflation is a monetary phenomenon, unrelated intellectually to relative changes in prices. As such, inflation is a distinct phenomenon. The second set of forces in the capital gains (losses) term is relative price changes that reflect changes in either demand for or supply of the capital good, excluding vintage and age effects. Capital goods are inputs, and so their demands are derived demands. For example, suppose a new substitute for gasoline comes on line. Oil tankers are a capital input to gasoline production, and oil tanker prices, ceteris paribus, will fall. The vintage of the oil tanker is not relevant, unless a new vintage embodies unique technology that accommodates the new energy substitute. All oil tankers will fall in price. This effect, in the absence of embodied technological differences, would leave asset prices relative to age or vintage unchanged. Changes in prices or technology of complements can have complicated effects over time on capital goods. Consider a laptop computer. Software is a complement in production to a laptop computer. Suppose new software is introduced for which old laptops are ill suited. This is a change in technology, not of the computer but of the software complement to the computer. Such a change in HJH results in obsolescence of earlier vintages of software, because the change is embodied in the new software. If no change has been embodied in the new computer, then the new software will
have the same effect on new and old capital. It will not, however, make the old asset obsolete relative to the new vintage. For new software to render an old computer worth less relative to a new one—that is, to have nonproportional effects on different vintages of capital—the new software must be accompanied by some innovation embodied in the new computer.12 If new computers are different, then this effect is captured by our definition of obsolescence, because it is vintage specific. If new technology is not embodied in a new asset, demand for the old asset relative to the new asset will not change its relative price.13 This means that software innovations are not changes in computer depreciation per se. We have now defined age, vintage, and time effects on productive capacity of assets. We define deterioration, obsolescence, and capital gain (loss) as the price changes that result from each of these respective physical effects. We are now prepared to see, using figure 12C.2, how these combine to influence the price during the life of an asset. In figure 12C.2 in one year the price of an asset moves down the diagonal one cell. The total price change is (4)
Δq(s, t) ≡ −[q(s + 1, t + 1) − q(s, t)].
The figure suggests we can decompose this total price change into two steps: (a) a vertical move down one column, (q[s, t] − q[s + 1, t]), and (b) a horizontal move across one row from left to right, (q[s + 1, t + 1] − q[s + 1, t]). “Depreciation,” as defined in the HJH model, is the vertical move down a column of an age-time tableau. “Revaluation” is the horizontal move from left to right across a row. The total price change is depreciation minus revaluation: (5)
Δq(s, t) = [q(s, t) − q(s + 1, t)] − [q(s + 1, t + 1) − q(s + 1, t)].
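To make equations (4) and (5) concrete, the following sketch works through a tiny age-time tableau. The prices, and the code itself, are my own illustration rather than anything in the chapter:

```python
# Hypothetical age-time tableau of used-asset prices q(s, t):
# keys are (age s, time t); age 0 is a new asset. The numbers are
# invented purely to illustrate equations (4) and (5).
q = {
    (0, 0): 100.0, (0, 1): 103.0,   # new-asset prices in periods 0 and 1
    (1, 0): 85.0,  (1, 1): 87.0,    # one-year-old asset prices
}

s, t = 0, 0

# Equation (4): total price change along the diagonal (a decline is positive).
total_change = -(q[(s + 1, t + 1)] - q[(s, t)])         # 13.0

# Equation (5): depreciation (down a column, same t) minus revaluation
# (across a row, with age fixed at s + 1).
depreciation = q[(s, t)] - q[(s + 1, t)]                # 15.0
revaluation = q[(s + 1, t + 1)] - q[(s + 1, t)]         # 2.0

# Note: both pieces still mix vintage effects with pure age and time effects.
assert abs(total_change - (depreciation - revaluation)) < 1e-9
print(total_change, depreciation, revaluation)
```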
Decomposing the diagonal move into two steps, depreciation and revaluation, is not enough to identify three effects, age, vintage, and time. Depreciation is (6)
Δ_s q ≡ Δq (holding Δt = 0) = −[q(s + 1, t) − q(s, t)].
Equation (6) advances the age index while holding the time index constant, so by equation (1) the vintage index is also advanced. The price 12. Feenstra and Knittel (2004) develop an innovative model in which expectations can cause consumers to purchase more computer capacity than they need in anticipation of future software developments. Analyzing synergies among software, hardware, and expectations by Feenstra and Knittel suggests an important area of new research. 13. I argue below that this distinction has important implications for the definition of depreciation.
change we track down the vertical involves a one-year-old asset that is a one-year-earlier vintage. Depreciation is the sum of deterioration resulting from age (exhaustion and decay) plus obsolescence resulting from vintage effects (embodied technological change). Revaluation is (7)
Δ_t q ≡ Δq (holding Δs = 0) = q(s + 1, t + 1) − q(s + 1, t).
By this definition, revaluation advances the time index and not the age index, so by equation (1) the price change we track from left to right is to an asset one year later and a one-year-newer vintage. Revaluation is the sum of obsolescence (embodied technological change) plus capital gain or loss (inflation and shifts in supply and demand). Both depreciation and revaluation are polluted by vintage effects.14 Analysis of the Diewert Model The HJH model helps to clarify the Diewert model. First, the prices in figure 12C.2 are not the prices Diewert uses. Figure 12C.2 prices are market “shadow” prices, without adjustments, of one unit of an asset in the relevant cell. Diewert assumes that the prices in the tableau have already been modified by statisticians to remove quality differences, so a Diewert cell would contain prices of a “constant-quality” asset. A statistical agency is assumed to have already solved the problem of extracting price changes due to quality change from market prices to produce a series of prices (or a price index) of a constant-quality capital good. For example, the Diewert tableau prices Egyptian pots or computers that, like Egyptian pots, are identical. This strikes me as a strong assumption for a model intended to estimate depreciation. In particular, if capital goods’ prices are corrected for quality change, then vintage effects are assumed away, but it seems to me that vintage effects have to drop from the analysis and are not known a priori. Nonetheless, making such corrections to investment flow data is common practice in statistical agencies. Diewert, who works with statistical agencies worldwide, is making an assumption that his principal target audience, statistical agency economists, finds natural. The advantage to the assumption is that Diewert can later define an explicit term, i ∗, for changes in asset prices between periods to capture effects that drive used asset prices down. Diewert calls this effect “obsolescence.” This differs from my definition of obsolescence as a pure vintage effect. 14. Hall (1971) shows how to use hedonics to isolate obsolescence. I apply Hall’s model in Wykoff (2002) to show how to extract the obsolescence component of depreciation from the revaluation term.
Next Diewert defines two possible concepts of depreciation. Neither one is the definition from the HJH model. Diewert defines “cross-section depreciation” to be the changes in prices moving down a vertical in a Diewert price tableau. Because he has assumed away vintage effects, his “cross-section depreciation” corresponds to “deterioration” in the HJH model. Recall that deterioration was the decline in price from aging, holding both the vintage and time indexes constant, leaving two forces that change asset quantities: exhaustion and in-use decay.15 Diewert defines “time series depreciation” to be movements along the diagonal in his quality-corrected price tableau.16 It turns out that Diewert pits his two definitions of depreciation against one another, arguing that time series depreciation is vitiated by two types of time effects—inflation and “asset-specific price changes.” He points out that one must strip inflation out of time series depreciation. Diewert also distinguishes ex ante from ex post time series depreciation. While this at first struck me as an unnecessary complication, it turns out that Diewert makes it because some accountants, using time series depreciation, think only ex ante asset-specific price changes should be included with deterioration in capital measures. This is why it is important to realize that this paper is addressing national and international agency accountants at this time. Ultimately his position on these two concepts of depreciation is that the correct concept should consist of “cross-section depreciation” plus ex ante asset-specific price increases, corrected for inflation expectations, that imply a fall in used asset prices relative to new ones. He calls this asset-specific intertemporal effect excluding deterioration “obsolescence.” Diewert argues that his concept of obsolescence is broader than the HJH concept, because i ∗ allows for asset-specific, real price effects between periods in addition to vintage effects. That is, Diewert argues that his concept of obsolescence is broader than price changes caused by capital-embodied technological changes. In the language of the HJH model, Diewert defines depreciation to allow for deterioration, obsolescence, and inflation-corrected capital gain (loss). Recall that capital gain (loss) consists of real time effects on asset-specific prices apart from vintage effects.
15. In contrast to Diewert here, Dunn et al. (2004) in estimating depreciation of personal computers rule out in-use decay a priori, so if they were to use quality-adjusted prices their depreciation would only allow for exhaustion. They report very large estimated obsolescence effects.
16. Things tend to get very cloudy here, in part because Diewert is so thorough, perhaps to a fault. He allows in his model for interest rates to vary, for differences between beginning- and end-of-period user costs, nominal asset-specific inflation, and real asset-specific inflation. He also allows for ex ante and ex post time series depreciation. He may know that some readers need these concepts spelled out, but I find they clutter the analysis. Diewert can work through the thicket himself, but I think some simplification by ignoring a few issues like beginning- and end-of-period user costs, varying interest rates with term structures, and so forth would clarify the exposition. However, in the context of the debate in 2004 among world national income accountants, Diewert obviously felt these details were necessary.
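A small numerical sketch may help fix the distinction between the two definitions. The prices and the expected inflation rate below are hypothetical; the calculation simply computes cross-section depreciation down a column, time series depreciation along the diagonal, and then strips an assumed inflation rate out of the latter:

```python
# Hypothetical constant-quality price tableau q(s, t), of the kind a
# statistical agency is assumed to supply; keys are (age s, time t).
q = {
    (0, 0): 100.0, (0, 1): 104.0,
    (1, 0): 88.0,  (1, 1): 91.5,
}
expected_inflation = 0.02   # assumed general inflation between t = 0 and t = 1

s, t = 0, 0

# Cross-section depreciation: one year older, same period (down a column).
cross_section = q[(s, t)] - q[(s + 1, t)]                    # 12.0

# Time series depreciation: follow the same asset along the diagonal.
time_series_nominal = q[(s, t)] - q[(s + 1, t + 1)]          # 8.5

# Stripping inflation: deflate the later price back to period-t dollars.
time_series_real = q[(s, t)] - q[(s + 1, t + 1)] / (1 + expected_inflation)

# The remaining gap between cross_section and time_series_real reflects the
# real asset-specific price change between the two periods.
print(round(cross_section, 2),
      round(time_series_nominal, 2),
      round(time_series_real, 2))   # 12.0 8.5 10.29
```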
Evidently, Diewert’s distinction between ex ante and ex post obsolescence is the difference between expected and realized inflation. If one is going to distinguish ex ante from ex post inflation, why not distinguish ex ante from ex post technological change?17 I am not sure depreciation deductions should cover real asset-specific time effects. I think depreciation should consist only of deterioration plus obsolescence as defined in HJH. The reason for this is subtle but worth discussion. In addition to deterioration, only vintage-specific changes cause prices of old assets to fall relative to new ones. All other time changes in supply of or demand for assets will influence new and old assets proportionally and leave depreciation rates unchanged. This point is arguable. My view is this: capital goods are inputs, and thus their demands are derived from production processes. A shock to the market for a capital input only alters the relative prices of different versions of the inputs, if different vintages embody different technologies. A new software “killer application” changes old relative to new asset prices (depreciation) only if the new computer embodies a technological innovation that can exploit the killer application.18 This capital-embodied innovation is a pure vintage effect captured by the “narrow” definition of obsolescence. There is one more important difference between Diewert and HJH. The conversion of historical investment flows into capital aggregates is usually thought of as a problem of choosing weights, φ_i, in the following equation:
(8)   K_t = Σ_{i=1}^{L} φ_i I_{t−i}.
K is capital, I investment corrected for inflation, and L the life of the oldest asset. The HJH analysis produces estimates of weights, φ_i, of past investment flows in determining the period-t capital aggregates like equation (8) for the stock of capital. The quantity weights are relative productive capacities (estimated from relative asset prices based on duality) in measuring the capital stock and relative efficiencies (estimated from relative user costs based on duality) for capital services. Diewert says that equation (8) arbitrarily imposes linear aggregation. He assumes, after constructing individual price and quantity indexes, that he still has the problem of decomposing capital aggregates into price and quantity components. He assumes that nominal capital aggregates in each period—say, N_t for the nominal capital stock—are some function, or index, of his price and quantity indexes, so that
17. Admittedly, this would be much more complex and require more than an inflation-forecasting model. It would require a formal expectations-generating model, such as rational expectations, of both economics and technological change.
18. The working paper by White et al. (2004) is an excellent illustration of the complexities involved in constructing price indexes from hedonics when synergies are occurring between capital innovations, software innovations, and marketing innovations.
(9)   N_t = P[p_1, p_2, . . . , p_L] Q[q_1, q_2, . . . , q_L].
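A brief sketch can make the contrast between equations (8) and (9) concrete. All numbers below are hypothetical, and the Fisher formula stands in for Diewert’s unspecified index functions P[·] and Q[·] purely for illustration:

```python
import math

# Equation (8): linear aggregation of deflated past investment flows with
# hypothetical relative-efficiency weights phi_i (all numbers illustrative).
phi = [1.00, 0.85, 0.70, 0.55]              # weights for ages 1 through L = 4
investment = [120.0, 110.0, 105.0, 100.0]   # I_{t-1}, ..., I_{t-L}
K_t = sum(w * x for w, x in zip(phi, investment))

# Equation (9): decompose a nominal aggregate into P[.] times Q[.].
# Diewert leaves P and Q as general index-number functions; a Fisher price
# index over vintages is used here only as one familiar possibility.
p0 = [100.0, 82.0, 68.0, 55.0]   # vintage prices, period 0 (hypothetical)
p1 = [103.0, 84.0, 69.0, 56.0]   # vintage prices, period 1 (hypothetical)
q0 = [1.20, 1.10, 1.05, 1.00]    # vintage quantities, period 0
q1 = [1.30, 1.20, 1.10, 1.05]    # vintage quantities, period 1

def fisher_price_index(p0, p1, q0, q1):
    laspeyres = sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))
    paasche = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))
    return math.sqrt(laspeyres * paasche)

P = fisher_price_index(p0, p1, q0, q1)
N1 = sum(a * b for a, b in zip(p1, q1))   # nominal capital aggregate in period 1
Q = N1 / P                                # implied quantity index (period-0 prices)
print(round(K_t, 1), round(P, 4), round(Q, 2))
```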
Diewert uses index number theory to construct aggregate price, quantity, user costs, and services indexes. Diewert appears to be basing this analysis of designing capital accounts in part on utility theory. This may help explain why his model differs from HJH, which is based on neoclassical capital theory.
Conclusion
Diewert has designed a solution to an important problem now confronting world national income accountants—how to construct capital accounts from historical investment flow data. While unique in some respects, Diewert’s model is built on solid economic theory and provides a defensible solution to virtually all practical problems the accountants will encounter. Diewert has identified the information required to construct capital accounts. Some aspects of the model will be contentious; nonetheless, this is a major advance in clarifying the issues, providing a guide, and setting a future research agenda.
References
Ashenfelter, Orley, David Ashman, and Robert LaLonde. 1995. Wine vintage quality and the weather. Chance 8 (4): 7–14.
Dunn, Wendy E., Mark E. Doms, Stephen D. Oliner, and Dan E. Sichel. 2004. How fast do personal computers depreciate? Concepts and new estimates. NBER Working Paper no. 10521. Cambridge, MA: National Bureau of Economic Research.
Feenstra, Robert C., and Christopher R. Knittel. 2004. Re-assessing the U.S. quality adjustment to computer prices: The role of durability and changing software. Paper presented at CRIW Conference on Index Theory and Measurement of Price and Productivity. 28–29 June, Vancouver, B.C.
Feldstein, Martin, and Michael Rothschild. 1974. Towards an economic theory of replacement investment. Econometrica 42 (May): 393–423.
Hall, Robert E. 1968. Technical change and capital from the point of view of the dual. Review of Economic Studies 35 (January): 35–46.
———. 1971. The measurement of quality change from vintage price data. In Price indexes and quality change, ed. Zvi Griliches, 240–71. Cambridge, MA: Harvard University Press.
Hall, Robert E., and Dale W. Jorgenson. 1967. Tax policy and investment behavior. American Economic Review 57 (3): 391–414.
Hulten, Charles R. 1990. The measurement of capital. In Fifty years of economic measurement: The jubilee of the Conference on Research in Income and Wealth, ed. Ernst R. Berndt and Jack E. Triplett, 119–58. Chicago: University of Chicago Press.
Jorgenson, Dale W. 1974. The economic theory of replacement and depreciation. In Econometrics and economic theory, ed. W. Sellekaerts, 189–221. New York: Macmillan.
Solow, Robert M. 1970. Growth theory: An exposition. Oxford: Clarendon.
Triplett, Jack. 1996. Depreciation in production analysis and economic accounts. Economic Inquiry 31 (1): 93–115.
White, Alan G., Jaison R. Abel, Ernst R. Berndt, and Cory W. Monroe. 2004. Hedonic price indexes for personal computer operating systems and productivity suites. NBER Working Paper no. 10427. Cambridge, MA: National Bureau of Economic Research.
Wykoff, Frank C. 2002. Obsolescence in economic depreciation from the point of view of the revaluation term. Working paper for National Bureau of Economic Research and Conference on Research in Income and Wealth productivity workshop. 28–30 June, Cambridge, Massachusetts.
Remarks Robert E. Hall
After reading and hearing the papers at this conference, I’d like to take the opportunity to summarize my own positions on some of the key issues of the conference set forth in my own papers. I was gratified to see that these papers have had some impact on the work of this conference, but there has been remarkable confusion about what paper said what. So let me just spend a few minutes on this topic, because all these papers, I believe, are completely central to the topics addressed at this conference. The first paper, “The Stock Market and Capital Accumulation,” came out in the American Economic Review (AER) in December 2001. That paper infers the total amount of capital, tangible and intangible, in the economy—specifically in the nonfinancial corporate sector of the U.S. economy—from the value of the securities that are claims on that capital. An important factor in that paper is the recognition that one of the reasons that the market values capital at prices above or below its creation cost is scarcity arising from adjustment costs. A standard view of the stock market that I inherited in this research was that—because short-run supply curves are upward sloping—an expansion of the economy will generate rents that show up in the stock market. This view, which we attribute very closely to James Tobin, has also been run in reverse in the q theory to infer adjustment costs from values in the stock market. I approached it in a different way, which was to try to remove the rents associated with adjustment and then infer from that how much
Robert E. Hall is the Robert and Carole McNeil Joint Senior Fellow at the Hoover Institution, professor of economics, Stanford University, and a research associate of the National Bureau of Economic Research.
capital there was after taking care of adjustment costs. So that’s the basic idea, the new idea in that paper. What it shows is that scarcity rents are a fairly small factor in the movements of the stock market, with what I would take to be realistic adjustment costs. The conclusion, then, is that the amount of capital in the economy is very different from the amount that you get by accumulating observed investment. In fact, the correlation is, I believe, a little bit negative. Central to the subject of this conference is that the amount of capital that that calculation reveals is about double the recorded amount of capital. In other words, we have as much intangible capital as we have tangible capital. Now, people disagree, and of course it depends on what the stock market is doing the day you do that calculation, but basically there isn’t much doubt that there’s a big excess. One thing I want to stress in this thinking is to remember that if you did the same calculation in 1980, the apparent amount of capital inferred from securities markets was about half the amount of capital that you got by accumulating past investment. So this process works in both ways, and of course the q theory has always stressed this. Values of q are found to be between one-half and two. q is close to a random walk, but fluctuating within that range, at least over the times we’ve looked at it. So that was one exercise. Then I wrote a paper that is a little bit of a lark, especially in retrospect, which was published in the Brookings Papers on Economic Activity in late 2000. That paper was written during the height of the dot-com boom and was entitled “e-Capital.” It’s basically a spin-off from the first paper, asking the following question: Suppose that you took seriously the idea that there is something besides physical capital, that the valuable stuff that corporations own goes beyond the physical capital and includes a lot of valuable intangibles. And let’s call those intangibles another factor of production. Then the question is this: can you build a coherent story within the entire body of aggregate analysis that macroeconomists bring to this kind of a subject? In particular, can you make sense out of the labor market of the 1990s by invoking another factor of production that we didn’t know we had before until we looked in the securities markets? And the answer is unambiguously yes: you get a completely coherent story about, for example, why the wages of college-educated workers, whose skills are complementary with e-capital, have risen relative to those who haven’t been to college. There’s been a dramatic difference between college graduates and others over that period. The underlying idea was to take the intangible number from the AER paper and push it to see if it made sense from the point of production economics. The third in the set of published papers is my Ely lecture. Many people seem to cite the Ely lecture when they actually mean the AER paper, so I would appeal to you to straighten that out, because the Ely lecture is actu-
ally on a fairly different topic. There’s very little overlap. Its title is “Struggling to Understand the Stock Market.” It does one thing that is routine that I won’t dwell on. It makes the case from financial economics that we really have no reason to condemn the stock market as irrational. If it were irrational, fortunes would be made by arbitraging the irrationality. And there’s absolutely no evidence that such fortunes are being made. In fact, there are so many spectacular examples when fortunes were lost in that enterprise. So let’s just take as given that the stock market is rational. Then it raises the question: can you make sense in a rational valuation framework of the high volatility of the stock market? And the answer that’s given there is yes because the stock market values prospective earnings, and prospective earnings are highly volatile. That has led into ongoing research of mine to try to understand the volatility of earnings. I would mention here that I think the biggest unresolved question in intangibles is the third leg of the intangible story. The intangible story has basically three legs. One is that we have data on the formation of intangibles, and there’s a lot of interesting material in papers in this conference on that point, which I think is terrific. Then we have the work that I’ve done and which Erik Brynjolfsson and other people have done that uses security markets to try to get the value of intangibles. The third leg, and the one that I’ve been working on recently—and that I think in some ways is the toughest—is that intangibles should have earnings, right? We’re putting all this money in intangibles so they need to provide a return at some point. And tracking the return of intangibles right now is the biggest mystery. From the rational valuation perspective, a lot of that growth could be in the future. You could tell a completely coherent story about individual firms and about the stock market as a whole in which the big payoff lies in the future. And the growth optimists, like Dale Jorgenson, say yes, that’s going to happen. The economy’s going to grow a lot, and that means the earnings will grow a lot and it’s all going to work out. And we can understand why the stock market is at an all-time high today relative to earnings. It all makes sense, but it all lies in the future. The one thing that has not happened is the tremendous shortfall—for example, between the cumulative earnings of corporations in the postwar period today and the return to the stock market over the same period. Three, four hundred basis points’ difference. Where is it? It’s all in appreciation of the stock market that’s occurred, apparently in anticipation of earnings that will occur in the next decade or two. So if you’re following the numbers here, you’re holding your breath because if it doesn’t come true, then the stock market isn’t going to remain at anything like its current level relative to earnings. I’ve already hinted at some of the stuff that I’m working on now. There are a couple of papers that are available on my Stanford web site. I’ve built
what I call the “industry dynamics model,” which looks at all these issues in several dozen industries and in particular goes back to the adjustment cost topic and finds adjustment costs, using a new identification scheme, that are very similar to those that I got from Matthew Shapiro’s work. I see it as a strong endorsement of Matthew’s Ph.D. dissertation. I find that rent from adjustment costs isn’t the big part of the story. Of course, that makes the mystery of intangibles even more interesting. I think it says that most of the thinking that you see in this conference is on the right track. The q theory is not what we see. We see intangibles, or something like them. Let me just say something about the resulting view that I hold of the issues in this conference. First and foremost, the rise in the stock market invites the interpretation that lots of intangibles were formed in the 1990s. Furthermore, as many papers in this conference show, there’s lots of other evidence. There’s confirmatory evidence everywhere you look. If you look at individual industries, if you look at various types of intangible formation, if you look in particular at the role of information technology, all those things seem to converge on the notion that something really important happened in the 1990s and, in particular, some real heroes in terms of individual firms that we have talked about a lot. Companies such as Dell and Wal-Mart really figured out how to take advantage of information technology and formed a huge amount of intangibles. Wal-Mart is valued at six times the cumulative value of its investment. So what does that mean? It’s an intangible engine—a remarkable organization. I would say that, if you do the calculations in my AER paper, it’s only borderline plausible that the volume of intangibles revealed in the market is simply the cumulative amount of investment that actually occurred. In other words, are the intangibles there just because they were formed, or have there been capital gains in intangibles? Are they worth more than their investment cost? I wouldn’t rule out the capital gains hypothesis. As I said in the AER paper, it’s a stretch to get enough invisible investment in the 1990s to cumulate to the kind of valuation for intangibles that I get in that paper. So I would certainly consider that the great heroes—Wal-Mart, Dell, eBay, the companies that have built business models in a completely new framework—have actually earned substantial capital gains. I believe that Dell has earned a capital gain on its business process, which is totally different from the way companies were ever run before. That releases you from trying to say that it’s only worth the investment. It says that ideas can command values higher than the cost of creating the ideas. My own view is that finding the investment cost is really important, and I’m happy to see that much of the research we’re talking about in this conference has to do with that. Measuring capital gains is another thing. The thing I would demote, although I know there are people here today who
will make a spirited case in the opposite direction, is that it was irrational markets that (even today) were valuing things that are just never going to happen. Now, of course, it’s easy to go through valuations in February 2000 and say, “Boy, did we get it wrong.” Yahoo was worth $130 billion in February of 2000 and it’s worth, what, $5 billion today? So we got it really wrong on Yahoo. (As of the revision of these remarks, Yahoo had rebounded to $48 billion.) Now, there are many people who would say, “Well, we were just crazy. You know, we went through a period of craziness.” But I don’t buy that—partly because I was there and it did seem like there was something special going on. I’m not sure I ever believed it was worth $130 billion, but I thought that I respected those market values. And some of the companies that I found out about and thought had something important going on, meriting a high value, like eBay, have sustained those values. eBay is worth today almost what it was in February 2000. It’s an amazing accomplishment. It’s not that the whole dot-com idea was crazy and was a wave of irrationality; it’s that we overstated the earnings potential of some models. Others, like eBay, had at all times a solid revenue model—it was always a joke at breakfast when some new dot-com came out and we’d say, “What’s the revenue model?” Sure enough, Yahoo’s lack of a revenue model as opposed to eBay’s solid revenue model actually worked out. So that turned out to be the right question. So I would say I’m against the idea that it’s all irrational and eventually it will all disappear. I am in favor of balancing between the capital gains view and the capital formation view, and I respect both of those views. I remind you again though, before you get completely carried away with trying to understand where all the extra value comes from today, or in the ’90s, do remember that we have to explain what appear to be negative intangibles, a shortfall in capital market value, that occurred in the ’70s and ’80s. So I would be very cautionary on that point. I’ve mentioned that it’s important to look at the industry- and companylevel data and findings like the Brynjolfsson multiplier of 15 that tell you that there’s something important that you can learn from looking at these issues at a more micro level. There’s a lot of that in this conference, which I strongly applaud. I would like to be able to say that the line of research of my own that I’ve summarized here is completely self-contained and completely absent from contradictions. It may or may not be right, but it all fits together, it’s all coherent, and it explains a lot. That’s not quite true. Interestingly, the one fact that I can’t explain is what appears to be a dramatic overvaluation of debt. It’s impossible to explain in a view like mine. We’ve had this highly productive capital over the last decade and a half, and it’s impossible to square that with what had been very low long-term interest rates over the same period. There’s been an anomaly of overvaluation of
debt, which of course is an old story. But I just want to remind you that there is an issue here. I think it’s fair to say—and certainly the Brynjolfsson and Jorgenson views of the world that I have followed and that I think I buy say it—that the two great general-purpose technologies that have emerged in the last decade are the server, the completely generic, $1,500 computer with almost unlimited power, and database software. Interestingly, database software gets much less coverage in discussions of this. Certainly, for example, Jorgenson’s story is all silicon and not software at all. I would like to bring software into that story. The database is what makes eBay possible. The database is what makes it possible to create overnight a business model that does all kinds of amazing things without having to do a lot of custom programming. If you look at what American Airlines went through to build the first airline reservation system, which took years of development and hundreds of millions of dollars in the 1960s, you could build a similar system in a few weeks today running on a database because it’s all generic. It’s all generalpurpose technology. What’s the other $14 of the Brynjolfsson multiplier? A lot of it is database. A lot of it’s other things, too, but I just wanted to call your attention to that fact. I’ve benefited a great deal from looking at the papers of this conference. I think this is really an exciting area, and I think bringing together these three factors of formation, valuation, and return and getting that story completely straight is going to occupy a lot of good minds for a long time.
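The bookkeeping behind the value-based leg of this argument can be sketched as follows. The magnitudes are entirely hypothetical, and the calculation is only a stylized stand-in for the adjustment-cost estimation described above:

```python
# Stylized sketch of the accounting behind value-based measures of intangibles:
# market value of equity plus debt = tangible capital at reproduction cost
# + scarcity rents from adjustment costs + intangible capital. All numbers,
# including the small rent term, are hypothetical; Hall's papers estimate the
# rent component from an explicit adjustment-cost model rather than assuming it.
market_value_of_claims = 10.0e12   # hypothetical, dollars
tangible_capital = 5.0e12          # hypothetical cumulated measured investment
scarcity_rents = 0.3e12            # assumed small, per the argument above

implied_intangibles = market_value_of_claims - tangible_capital - scarcity_rents
print(implied_intangibles / tangible_capital)   # roughly 1: intangibles ~ tangibles
```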
Baruch Lev I’m usually introduced as the guru of intangibles, and I tend to agree, because I remember well what Peter Drucker said about gurus: journalists like to use the term guru because charlatan is hard to spell. Now, you all came here to hear about and to discuss intangibles, and I’m sorry to tell you that in the circles that I move in, which are mainly financial analysts, accountants, and chief financial officers, there has been some disillusionment with intangibles lately. To put it bluntly, people ask me where all the intangibles have gone. They seemed to have vanished with NASDAQ, with Enron. Chairman Greenspan, for example, in his semiannual report on the state of the economy to Congress in February 2002, spoke eloquently about intangibles (“conceptual assets,” in his terminology) and provided the profound insight that they are not subject to the Baruch Lev is the Philip Bardes Professor of Accounting and Finance at the Stern School of Business, New York University.
cyclicality of inventory and hence may lessen the severity of recessions. But he too noted that intangibles are very vulnerable because they are linked to the reputation of management, and he mentioned Enron several times, saying that when management’s reputation is destroyed, the intangibles are gone too. This was followed several weeks later by the Wall Street Journal’s major article, which I’m sure many of you read, about the vanishing intangibles, and it particularly mentioned three companies: Enron, Winstar, and Global Crossing. About Enron the article said that its physical assets kept the company going for quite some time, but it was ultimately dragged down by the intangibles, and then Enron completely collapsed. Similarly, they talked about the vanished intangibles of Winstar, Global Crossing, and other tech companies. So this really intrigued me, this vanishing-of-the-intangibles phenomenon, and I decided to look at data. I got the annual financial reports of these three culprits—Enron, Winstar, and Global Crossing—for the last three years and read them carefully with the assistance of a Ph.D. student. What I found was astounding, and I’m sure it will surprise you too, because these three companies were regarded as the epitomy of the new economy— of intangibles-intensive enterprises. The first thing I found was that none of the three companies invested in research and development (R&D) in the last three years of their existence. There is not a mention of significant R&D, not in Enron, not in Global Crossing, and not in Winstar’s financial reports. There is also not a mention of substantial acquired technology, which in accounting goes by the name of “in-process R&D” and has some strange accounting rules attached to it. Enron had all of twenty patents to its name, mostly old patents, on physical assets. There is no mention of the cost of training employees; there is no mention of brand enhancement or of acquiring brands. The only intangible I could find for Enron was a total of 700–800 millions of dollars of software for energy trading, over several years, but this expenditure is dwarfed compared to Enron’s investment in physical assets, about $8 billion in the last three years of its existence. Winstar spent some money on spectrum acquisition, which didn’t give it much of a competitive advantage over others. So my question is this: where are all those intangibles that presumably have vanished? In fact, nothing much vanished, because the intangibles weren’t there in the first place. Intangibles don’t come from thin air, as you all know. The days of manna from heaven are unfortunately long gone. Nowadays, you have to invest in intangibles, in R&D, brands, information systems, human capital, and so on, and these companies, along with many other temporary highfliers of the late 1990s, didn’t invest much in intangibles. Surprisingly, these supposedly new-economy companies invested heavily in tangible assets, so the idea that they were intangible-intensive is absurd.
Enron, for example, spent in the last three years of its existence about $8 billion on tangible assets, fixed assets—primarily electricity-generating facilities and gas pipelines. Global Crossing spent $2 to $3 billion on fiber optics. So these companies were in fact old-economy companies that collapsed partly because they overspent on physical assets financed by huge debt, and partly because of a failed business model, and not the least because they played games with their financial reports to investors. I think that people are confused between intangibles and hype or fraud. The latter appear to create from time to time market value, at least for a while—recall Enron’s $70 billion capitalization in 2000. This was the case for Enron and firms like it. So if you just look at market minus book value of companies and ascribe the difference to intangibles, without careful examination, you will fall into a trap. The problem is, and here I come to my main point, that we don’t have reliable data to distinguish between the two—real intangibles and hype—in real time. This is a major problem. The data that we have on intangibles, the data that corporations disclose in financial reports, range from seriously deficient to useless, and I’ll get back to this in a moment. Now, I mentioned earlier Chairman Greenspan’s comments about the vulnerability of intangibles. Intangibles, of course, are vulnerable, but I don’t think this is because they are associated or linked with the reputation of managers, as he claimed. If Bill Gates’s reputation were to be tarnished for some reason, I doubt that the huge intangibles of Microsoft related to its R&D, brand, and distribution channels would disappear overnight. Intangibles are vulnerable because, as we all know, their benefits are very difficult to appropriate (spillovers, partial excludability), and also because there are no markets in intangibles, and mainly because the information on intangibles to both corporate outsiders and insiders is wholly deficient. These are the major reasons for the high risk of intangibles. I work a lot with managers. They don’t know much more than investors about intangibles. That’s, to me, the main vulnerability: if you cannot measure or value an asset, you cannot manage and secure its benefits. You, of course, work with R&D data extensively because this is the only intangible that is reported by managers. But even R&D information, as many of you know, is seriously noisy. I have seen cases where corporations include in R&D maintenance of equipment, which they call “R&D,” or quality control expenses, information technology, and so on, all lumped into R&D because it’s nice to have a large R&D figure in your income statement and auditors are not going to argue with you about a classification in the income statement that doesn’t affect net income. So even this item, the only intangible that is publicly disclosed, is rather noisy. I’m working now with a leading chemical company on improving their resource allocation model. In essence, I estimate for them the return on investments in three types of R&D: product, process, and facility cost-saving
R&D, along with the return on brand enhancement and creation. The raw data that I got from the accounting department on R&D—and these are the sort of data that are published for investors—were very noisy. I mean, they jumped all over the place, and I, like you, know that R&D is a very stable series. It turns out that the R&D numbers I received were affected to some extent by the political power of the R&D chief. If he was strong politically within the organization, he managed to get only small allocations of overhead costs from headquarters. If he was politically weak, he got significant allocations from top management, which increased the cost of R&D. This is just one example of extraneous factors that can affect R&D numbers. Now R&D expenditures are at least reported publicly. All other intangibles are not reported and in many cases are not even internally known to management. Essentially, no company that I know of reports systematic data on information technology, brands, employee training, alliances, and the like. The pharmaceutical company Merck has some 200 alliances with other companies in which they develop and market drugs. You don’t get much information from Merck about what is done within these alliances and how many of them are really performing—not even the share of total Merck revenues that come from alliances. Basically, there is no useful information on alliances. You cannot evaluate the efficiency of this important corporate activity. There was some progress recently that I thought was really significant: after years of pressure, the Financial Accounting Standards Board (FASB) which is the body that sets accounting and reporting standards in the United States, officially added in January 2002 an intangibles disclosure item to its agenda. For those of you who are familiar with what the FASB is doing, this is a substantial development because if it’s an agenda item, they have to come up with some kind of statement, some kind of a report. So this appeared to be promising. Alas, I now see on the FASB web site that they have frozen the intangibles project, given “more urgent” concerns in the wake of Enron and others. My point is that until we get good information from corporations about intangibles, we definitely won’t be able to make a clear distinction between real intangibles that create growth and market values, and the Enron-type impact of fraud and lies and hype on market values. And much of the data that you are using, including for the national accounts, is also partial—and contaminated. I therefore have a friendly and important suggestion for you: having spent lots of time with accounting regulators, both FASB and Securities Exchange Commission (SEC), I know that all they hear from are managers who tell them day and night that they don’t need to disclose anything. Yes, managers generally repeat the same mantra: (a) additional disclosure is extremely costly, (b) nobody needs it, and (c) it opens them to litigation. Almost every letter to the FASB has these three points, as if all the
letters were dictated by the same person. Accountants often support managers. I think it’s extremely important that the FASB and SEC will hear from you—and by you I mean the Federal Reserve, the Bureau of Economic Analysis, the Census Bureau. Just meet with them, tell them about your information needs. Counterbalance the gagging effect of managers and accountants. Lastly, I want to make one point in passing. I find it somewhat ironic that I have heard the same comments at this conference that I hear in accounting and finance circles. People here said several times, and I’m sure with good reason, that intangibles are very uncertain—that we don’t know much about them and about their consequences, and hence intangibles shouldn’t be considered as assets in the national accounts. People here also said that we need to develop conceptual approaches for intangibles and that maybe R&D should be reported as an asset in satellite accounts, but not more. I hear exactly the same arguments in accounting circles. Intangibles are very uncertain, risky, and we don’t know much about them. We shouldn’t put them on corporate balance sheets—maybe in a footnote. And you know that when something is put in a footnote, it is buried there forever. No one pays much attention to footnotes. I doubt that people pay much attention to satellite accounts either. I wish they would. But this is a sure way to bury an issue. You all hear now in the wake of Enron about the stock options debate, and you probably know that the FASB went to war in the mid-1990s about stock options, when they wanted companies to report stock options as an expense in the income statement. This created incredible antagonism, particularly by high-tech companies. Consequently, the FASB backed off, saying, the stock options information will not be in the income statement, but all of it will be in a footnote. Those who believe in market efficiency smiled and said, who cares whether the information is on page 2 or page 5 of the report? As long as the information is disclosed, it’s fine. Investors are astute, and they’ll take the cost of stock options from the footnote in page 5 and subtract it from earnings in page 2, and they’ll have what the FASB wanted in the first place. But managers knew better, and they were happy with the footnote solution. They couldn’t care less when the information is in a footnote because they know that if it’s in a footnote, it’s buried. Few care about it. So if someone—and this is my second message—thinks seriously about intangibles, reporting them in some kind of a subsidiary or satellite account, at least from the private sector’s experience, is like burying the issue. Better to face the issue: many intangibles, like patents, software, or brands, have attributable benefits and therefore can be valued. Don’t hide behind “we still don’t know much about intangibles, we need concepts.” We will never know much about intangibles, and we always will need better concepts. To paraphrase Nike, just do it! Face head-on the intangibles issue.
Erik Brynjolfsson and Lorin Hitt
Herbert Simon once remarked that “economics is only a one-digit science.” In the case of intangibles, even that standard sometimes seems to be too optimistic. The difficulties we are facing in measuring the quantity or price of intangibles with much precision might not be important if they didn’t account for such a large share of the economy. However, all indications are that intangibles account for trillions of dollars of capital in the U.S. economy today (Nakamura 1997; Brynjolfsson and Yang 1999). What’s more, knowledge, business processes, organizational capital, patents, trademarks, brands, and other intangibles are likely growing as a share of the nation’s capital stock. We have studied intangibles related to information technology (IT) especially closely and found evidence that they may exceed the value of computer hardware itself by an order of magnitude, or more. In particular, one of the biggest categories of IT investment is process-enabling IT, including systems for enterprise resource planning (ERP), customer relationship management (CRM), and supply chain management (SCM), among others. According to McAfee (2001), these systems account for the majority of IT spending in large corporations. These projects inevitably require extensive redesign of business processes and employee training, in conjunction with expenditures on new hardware and software. As we have noted in the past, the real costs, and the real value, from IT projects are not in the purchase of the hardware or software but rather in the complex and sometimes subtle interactions of capital, labor, and information. There is a virtually infinite number of combinations and sequences in which to arrange the steps in most business processes, let alone the bits of information, the people, and the tangible components. Each of these combinations is likely to yield a different outcome, and some outcomes are vastly more valuable than others. But identifying, and then implementing, the more effective combinations is incredibly costly. It is a process of discovery and innovation. It is also a process of imitation and learning. What’s more, it occurs on many different levels. A firm’s chief executive officer,
Erik Brynjolfsson is the George and Sandi Schussel Professor of Management at the Sloan School of Management, Massachusetts Institute of Technology. Lorin Hitt is the Alberto Vitale Term Associate Professor of Operations and Information Management at the Wharton School, University of Pennsylvania. We are grateful to Carol Corrado, John Haltiwanger, and Dan Sichel for encouraging us to write up these thoughts and for helpful discussions before, during, and after the National Bureau of Economic Research/Conference on Research in Income and Wealth conference: Measuring Capital in the New Economy. Generous funding was provided by the Center for eBusiness at MIT and the National Science Foundation (grants IIS-0085725 and IIS-9733877).
chief information officer, chief operating officer, chief financial officer, and others must make broad decisions about a company's strategy. But perhaps more important, if less widely noted, are the myriad small decisions, adjustments, ideas, and changes made by millions of employees as they work with the new systems or invent new ways to do their jobs, exploiting new technologies or new business processes. If it is true that people only think when confronted with a problem, then the implementation of new technologies has generally provided some of the most fertile opportunities for people at all levels of organizations to develop organizational innovations. Indeed, while many of the micro-innovations may be relatively insignificant, the fact that productivity often peaks only years after a new IT system is first installed (see, e.g., Brynjolfsson and Hitt 2003) suggests that the learning-by-doing process is important.

Collectively, the value of the big and little process innovations, the explicit ones, and those that are only known implicitly by the firm and its workers, can be called "organizational capital," which is a key type of intangible asset in modern economies. We have come to the conclusion that for many purposes it is useful to treat organizational capital on par with other types of capital. Indeed, we have run production functions, labor and capital demand equations, and market value equations in this way and found that organizational capital often behaves much like other types of capital, to the extent we are able to measure it.

In the data we analyzed, even the direct spending on business process redesign and employee training dwarfs the technology spending. For instance, in joint work with David Fitoussi, we systematically interviewed managers of ninety-one such projects initiated since 1997 and found that computer hardware accounted for just 11 percent of the total initial costs. Software, while important, accounted for only 18 percent of the costs, with the rest being intangibles like business process reengineering, deployment, and training. As might be expected, most of the intangible costs were treated as current expenses, while most of the IT hardware was treated as capital investment. In some ways this is ironic, because in many cases the business processes have a significantly longer economic life than the hardware used. Interestingly, the roughly 10-to-1 ratio of intangibles to hardware is in the same ballpark as the ratio found in the work of Brynjolfsson and Yang (1999), which looked at the value implicitly attributed to computer-related intangibles using a series of market value regressions. If the ratios we found in our research, using both cost- and benefit-oriented approaches, also hold for the economy as a whole, then there is on the order of $2 trillion worth of IT-related intangibles in the United States today, and this number is growing faster than the rest of the economy. Furthermore, there are many other intangibles that are not related to the computerization of firms.
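A rough check on that ballpark figure (the arithmetic here is ours, not the authors'): with hardware at 11 percent of initial project cost, non-hardware spending per dollar of hardware is (1 − 0.11)/0.11 ≈ 8, and the intangible portion alone, excluding the 18 percent spent on software, is (1 − 0.11 − 0.18)/0.11 ≈ 6.5, the same order of magnitude as the factor of roughly ten implied by the market value regressions discussed below.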
Measuring Intangibles

It is often remarked that the market value of firms—the sum of equity and debt—is much larger than the book value of the firm's assets. Other related metrics based on variants of Tobin's q tell a similar story. At least some of that gap is undoubtedly due to the value of intangibles.

Why are so many intangibles missing from the book value of firms? There are at least two reasons. First, the value of intangibles is typically highly uncertain, making it difficult to place a reliable value on them, whether by accountants, statisticians, or even markets. This point has been thoroughly made by a number of authors, most notably Baruch Lev (see, e.g., Lev 2001). As pointed out by Nadiri (1993), this is compounded by the fact that the marginal costs are often much lower than average costs for general knowledge, and as a result competitive markets are difficult to establish. Accountants are loath to include an asset on a balance sheet when its market value may fluctuate widely from year to year, or even month to month. We have no doubt that the variability in the value of intangibles is an important reason for leaving so many of them off the balance sheets of corporations. However, intangibles are not entirely unique in this respect. Other assets also fluctuate greatly in value, and yet they are properly included among the assets of a firm. After all, the price of oil, and hence the value of oil reserves, can change dramatically in less than a year as well.

Accordingly, we would like to note another reason for the gap between the book value of a firm and its market value. As we noted in our paper with Shinkyu Yang (Brynjolfsson, Hitt, and Yang 2003), the value of any asset is contingent on the state of the world. In some states of the world, a glass of water is much more valuable to me than in other states. In the absence of a complete set of state-contingent markets, different people will have different values for different assets. In particular, an asset may have a very different value to equity holders than to debt holders. Equity holders are likely to care about the expected returns of a firm and its assets across all future states of the world. They will purchase firms and assets with greater expected returns, to the extent that they are the residual claimants of those returns. In contrast, debt holders are entitled to a relatively fixed rate of return. Their returns vary from this rate only in those bad states in which the firm fails to make its payments. In those cases, debt holders may be entitled to assume ownership of the firm's assets. By definition, these are exactly the states in which the firm is not generating a great deal of cash flow as a going concern. Hence, the organizational capital of the firm—the complex and subtle interactions of capital, labor, and information—is likely to have an especially low or even zero value in those states. In contrast, other
assets like unspecialized furniture, equipment, or oil reserves are likely to be nearly as valuable as they would be in other states, discounting any economywide cyclical effects.

Because there are (at least) these two basic types of claimants on the firm's returns, equity holders and debt holders, it may make perfect sense to have (at least) two distinct values for the firm. One, the market value, represents the expected returns of the firm and its assets across all states of the world. This is what the equity holders care about most. The other, the book value, gives an indication of what the firm would be worth in those bad states of the world where it failed to generate sufficient cash flows to cover its required debt payments. This is the value that debt holders care about most. A gap between the market value and the book value of the firm can thus be viewed as, in part, a reflection of the value of its organizational capital. Note that this gap would be present even if we could measure with great precision the value of the organizational capital in both good and bad states. Conversely, even assets with widely varying and uncertain valuations might properly be included in the book value of a firm insofar as debt holders expected to be able to salvage them if and when they took ownership of them.

As it turns out, organizational capital often has the properties of both being difficult to measure and, almost by definition, being nearly valueless if the firm ceases to operate as a going concern and its assets are separated and sold. In fact, the interaction of these effects may even amplify their importance: when there is more discretion in valuing an asset, it becomes less reliable as collateral for a creditor, regardless of the expected value across states. However, it is worth noting that these are logically distinct characteristics and that this distinction can help us understand why and when intangibles may be difficult to measure and manage. On one hand, this perspective suggests that although intangible assets may not survive the demise of the company that owns them, they need not necessarily be difficult to measure as long as the firm is a going concern. In fact, this is something that equity investors must do in real time, with real dollars at stake, whenever they trade on the stock market. On the other hand, this perspective reminds us that even if we were to somehow solve the measurement problem, we would not necessarily eliminate the gap between the market and book values of firms. An organizational economics perspective, not just a measurement framework, will ultimately be needed for a more complete understanding of the role of intangibles.

Revealing Intangibles Through Indirect Measurement

These properties lead to different approaches for attempting to better understand the role of computers and their associated intangible assets.
One way to circumvent the measurement problem is to infer the role of intangibles from their effects on firm outcomes. Consider an analysis in which we relate the market value (MV) of a collection of firms to the balance sheet assets they possess, such as ordinary capital (K), computer capital (C), and other assets such as cash or accounts receivable (O). Assume in addition that there is a set of computer-related intangible assets (I) that could be measured in the same terms as the other assets. To simplify discussion, we assume that all variables are measured as deviations from the sample mean, eliminating the need for an intercept in the regression, and that no additional control variables are needed for proper specification. If these were indeed all the relevant assets and the firm was in long-run equilibrium, then Tobin's q theory suggests that a dollar of additional assets should be associated with approximately a dollar of market value. This leads to a regression of the form

(1)   MV = β₁K + β₂C + β₃O + β₄I + ε,
where we expect the estimates to be β₁ = β₂ = β₃ = β₄ = 1. Unfortunately, since we can rarely directly measure the intangible assets, we may only be able to run the regression without the intangibles variable:

(2)   MV = β₁K + β₂C + β₃O + ε.
Can this regression be useful in developing a greater understanding of the role of intangibles? Fortunately, the answer is yes, since the results of the two regressions are closely related. Suppose further that the relevant intangibles are correlated with computer capital but not the other capital components (this can be verified if one can measure intangibles, and it appears to be true for some we can measure—see Bresnahan, Brynjolfsson, and Hitt 2002). In this case, standard omitted variables arguments suggest that

(3)   β̂₂ = β₂ + (C′C)⁻¹(C′I)β₄.
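For readers who want the algebra behind equation (3), the following sketch is ours rather than part of the original remarks; for clarity it treats C as orthogonal to K and O, so the partialling-out of those regressors can be suppressed. Substituting the true model (1) into the least squares coefficient on C in the short regression (2) gives

β̂₂ = (C′C)⁻¹C′MV = (C′C)⁻¹C′(β₁K + β₂C + β₃O + β₄I + ε) = β₂ + (C′C)⁻¹(C′I)β₄ + (C′C)⁻¹C′ε,

using C′K = C′O = 0, and the last term vanishes in expectation, leaving equation (3). Note that (C′C)⁻¹(C′I) is exactly the slope from a regression of I on C.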
The contribution of intangibles will cause the coefficient on the computer term to exceed its "true" value. More specifically, the size of this bias is just the coefficient one would get from a regression of intangibles on computers (think of this as the dollar amount of intangibles per dollar of computers) multiplied by the market value of intangibles per dollar (β₄). Even though the model cannot separately determine the "price" (β₄) and quantity (I) of intangibles independently (see Hall 2001 for an approach for performing this calculation using additional data), it can provide an estimate of the principal quantity of interest—the total value of the computer-related intangibles.

Results from estimating a variant of equation (2) with additional controls show that the coefficients on other capital components (K and O) are
consistently around the theoretical value of 1, while the estimates of the computer market value range from about 4 to as high as 20 depending on the specification (Brynjolfsson and Yang 1999; Brynjolfsson, Hitt, and Yang 2003). This leads to our observation that the value of computer-related intangibles may exceed the value of computer assets by an order of magnitude.

Different econometric techniques, such as fixed effects and long differences, can be applied to variants of equation (2) to control for unobserved firm heterogeneity. However, to the extent that intangibles may be slower than other capital components to adjust to firm investments or economic conditions, this type of analysis may actually remove the true effect of intangibles, leading to a lower estimate of the computer coefficient. In the extreme case where the level of intangibles is fixed over the sample period, these analyses would miss the role of intangibles entirely. Similar arguments apply to instrumental variables techniques where the equation is estimated with instruments correlated with computers but uncorrelated with the unobserved effects, which include, among other things, intangible assets (Cummins, chap. 2 in this volume). Unfortunately, while instruments will help us get close to the true value of the computer investment alone, we cannot learn much about computer-related intangibles—which is precisely what makes the analysis interesting in the first place.

That is not to say that we cannot learn more about the properties of intangibles from these types of specifications. In fact, results using first differences tend to yield estimates of the computer coefficient indistinguishable from 1, while longer differences (greater than three years) yield estimates comparable to the ordinary least squares estimates (Brynjolfsson, Hitt, and Yang 2003). Similar results also appear in a productivity framework—when we conduct productivity growth analyses using one-year differences, computers show a rate of return indistinguishable from their capital costs, the theoretical value. However, the coefficients rise steadily, to as much as five times that value, as longer difference lengths are considered. Moreover, if one considers the market value to be the discounted future value of productivity gains, a productivity contribution of intangibles two to four times larger than their theoretical value is not unreasonable, since the future value of this much additional productivity is broadly consistent with a market value of intangibles an order of magnitude larger than the direct value of computers. Conversely, as was noted at the time and in subsequent discussions, the apparently high levels of computer-related intangible assets found in data through 1997 "predicted" the subsequent surge in productivity, as conventionally measured. One test of whether the intangibles were real was their ability to create real output in subsequent years, beyond that accounted for by tangible inputs of capital and labor.
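The mechanism behind these estimates is easy to see in a small simulation. The sketch below is ours, not part of the original remarks; the dollar scales and the assumed ten dollars of intangibles per dollar of computer capital are illustrative choices, made so that omitting intangibles inflates the computer coefficient from its true value of 1 to roughly 11, within the range of estimates cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical firm-level data, measured in the same units as deviations from means.
K = rng.normal(size=n)              # ordinary capital
O = rng.normal(size=n)              # other assets (cash, receivables)
C = rng.normal(size=n)              # computer capital
I = 10.0 * C + rng.normal(size=n)   # intangibles correlated with computers (~$10 per $1 of C)

# True model, as in equation (1): a dollar of any asset adds about a dollar of market value.
MV = K + C + O + I + rng.normal(size=n)

def ols(y, X):
    """Ordinary least squares coefficients of y on the columns of X (no intercept)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

full = ols(MV, np.column_stack([K, C, O, I]))   # equation (1): coefficients all near 1
short = ols(MV, np.column_stack([K, C, O]))     # equation (2): intangibles omitted

print("with intangibles:   ", np.round(full, 2))   # approximately [1, 1, 1, 1]
print("without intangibles:", np.round(short, 2))  # computer coefficient near 11 = 1 + 10
```

In a panel version of this exercise, sweeping out a slow-moving I with fixed effects or short differences would push the computer coefficient back toward 1, which is the pattern described in the preceding paragraph.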
Direct Measurement of the Value of Intangibles

Of course, it would be much more satisfying to measure these intangible assets directly so that their properties can be revealed and studied without recourse to econometric argument. Sometimes there is no substitute for gathering the requisite data. As mentioned earlier, it is known that as much as 80 percent of the expenditure in large-scale IT projects is for business process design and other organizational change investments. However, this raises the question of what changes these firms are actually making with these investments.

Although it has proven difficult to develop a general framework for measuring all intangibles, the task can be greatly simplified if it can be limited to a well-defined set of practices where there is a strong theoretical belief that they are related to computers. In particular, decentralized organizational decision making and high levels of human capital have frequently been argued to be complementary to computer investment (see Brynjolfsson and Hitt 2003 for a survey of these studies). Furthermore, there are reasonable measures of these constructs that can be used in organization-level surveys, largely developed from the human resources literature on "high-performance work systems" (see a survey in Ichniowski et al. 1996).

In 1996, we conducted a survey of 400 large firms focusing on these two aspects of organizational design and found a collection of these practices (which are summarized by a single variable that we labeled ORG)¹ to have some interesting properties. In particular, the ORG variable appears to be correlated with computer capital stock but not with the stocks of ordinary capital or other assets, consistent with our suggestion that there is a unique relationship between computers and these practices. When ORG is entered alone in a market value equation (replacing I in equation [1]) it generally has a positive coefficient. However, more interestingly, it has a very strong interaction with computer investment—firms that are both high in ORG and have high IT investment have a substantially higher market value than other firms (Brynjolfsson, Hitt, and Yang 2003). This interaction is also found in productivity analyses (Bresnahan, Brynjolfsson, and Hitt 2002). Thus, at least for the practices that we are able to measure, it is the combination of computer capital and organizational capital that appears to be associated with significant value.

1. Specifically, ORG is derived from a principal components analysis of seven survey items covering organizational practices such as the use of self-managing work teams, decentralization of production decisions, and screening for human capital and training (see Bresnahan, Brynjolfsson, and Hitt 2002 for further details).

Clearly, decentralized decision making and human capital are not the only potential complements to computers. Other organizational investments such as business processes, incentive systems, and new ways of interacting with buyers and suppliers play a substantial role in many of the
success stories of computer-enabled organizational design (e.g., Dell, Wal-Mart, Cisco, Amazon, and many others). However, broadly speaking, a common thread in many of these practices is that they leverage the value of information, which is captured, stored, processed, and transmitted by IT. Following this logic, we conducted a more recent survey in 2001 covering a wide variety of practices of "information-intensive" firms. Our preliminary results suggest that additional practices such as widespread information access within the firm, clearly communicated organizational goals, performance-oriented incentives, and a well-defined corporate culture tend to appear in firms that are both technology intensive and highly productive. However, one can only imagine that this is still the tip of a much larger iceberg of potentially successful practices, including practices unique to particular industries or activities or practices that may be valuable only in certain settings and economic conditions.

Conclusion

Considerable work is still required to better understand these intangibles, but if we are to fully understand the role of computer investment in the economy this is where we will have to look. While we can measure the amount of IT entering the economy fairly well, without a better understanding of intangibles we will miss the largest contributors to both the costs and the benefits of technology investments. While it is a well-worn cliché that economic measurement lags behind the activity of the economy, this is not the first time that technological progress has required a rethinking of the way the economy is measured. Economists likely faced similar quandaries in understanding the role of capital when it became apparent that we were shifting from an agrarian to an industrial economy. If we are becoming members of an "information economy," it would be ironic if we had less information about the real factors of production than our predecessors who pioneered growth accounting in the twentieth century. Fortunately, intangibles need not be beyond calculation. They may be immense, but they are not immeasurable.
References

Bresnahan, Timothy, Erik Brynjolfsson, and Lorin M. Hitt. 2002. Information technology, workplace organization, and the demand for skilled labor: Firm-level evidence. Quarterly Journal of Economics 117 (1): 339–76.
Brynjolfsson, Erik, and Lorin M. Hitt. 2003. Computing productivity: Firm-level evidence. Review of Economics and Statistics 85 (4): 793–808.
———. 2004, forthcoming. Intangible assets and the economic impact of computers. In Transforming enterprise, ed. William Dutton, Brian Kahin, Ramon O'Callaghan, and Andrew Wyckoff. Cambridge: MIT Press.
Brynjolfsson, Erik, Lorin M. Hitt, and Shinkyu Yang. 2002. Intangible assets: How the interaction of computers and organizational structure affects stock market valuations. Brookings Papers on Economic Activity, Macroeconomics: 137–99.
Brynjolfsson, Erik, and Shinkyu Yang. 1999. Intangible benefits and costs of computer investments: Evidence from the financial market. Proceedings of the International Conference on Information Systems, 147. 14–17 December, Atlanta, Georgia.
Hall, Robert E. 2001. The stock market and capital accumulation. American Economic Review 91 (5): 1185–1202.
Ichniowski, C., T. Kochan, D. Levine, C. Olson, and G. Strauss. 1996. What works at work: Overview and assessment. Industrial Relations 35 (3): 299–333.
Lev, B. 2001. Intangibles: Management, measurement, and reporting. Washington, DC: Brookings Institution.
McAfee, A. 1999. The impact of enterprise information systems on operational effectiveness. PhD diss., Harvard Business School.
Nadiri, I. 1993. Innovation and technological spillovers. NBER Working Paper no. 4423. Cambridge, MA: National Bureau of Economic Research.
Nakamura, L. I. 1997. The measurement of retail output and the retail revolution. Paper presented at the CSLS Workshop on Service-Sector Productivity and the Productivity Paradox. Ottawa, Canada, April.
Contributors
John M. Abowd CISER 391 Pine Tree Road Cornell University Ithaca, NY 14850 Katharine G. Abraham Joint Program in Survey Methodology 1218 LeFrak Hall University of Maryland College Park, MD 20742 Eric J. Bartelsman Faculty of Economics and Business Administration Free University of Amsterdam 1081 HV Amsterdam The Netherlands Susanto Basu Department of Economics University of Michigan, Lorch Hall 611 Tappan Street Ann Arbor, MI 48109-1220 Sandra E. Black Department of Economics University of California at Los Angeles 9373 Bunche Hall Los Angeles, CA 90095-1477
Barry Bosworth The Brookings Institution 1775 Massachusetts Avenue, NW Washington, DC 20036 Timothy F. Bresnahan Stanford University Department of Economics 579 Serra Mall Stanford, CA 94305-6072 Erik Brynjolfsson MIT Sloan School, E53-313 50 Memorial Drive Cambridge, MA 02142 Carol Corrado Division of Research and Statistics Board of Governors of the Federal Reserve System 20th Street and Constitution Avenue, NW Washington, DC 20551 Jason G. Cummins Division of Research and Statistics Board of Governors of the Federal Reserve System 20th Street and Constitution Avenue, NW Washington, DC 20551
W. Erwin Diewert Department of Economics University of British Columbia #997-1873 East Mall Vancouver, BC V6T 1Z1, Canada
Lorin Hitt The Wharton School University of Pennsylvania 571 John Huntsman Hall Philadelphia, PA 19104
Mark Doms Federal Reserve Bank of San Francisco 101 Market Street San Francisco, CA 94105
Mun S. Ho Kennedy School of Government Room B402 79 John F. Kennedy Street Cambridge, MA 02138
Barbara M. Fraumeni Bureau of Economic Analysis 1441 L Street, NW Washington, DC 20230 Bruce T. Grimm Bureau of Economic Analysis 1441 L Street, NW Washington, DC 20230 Bronwyn H. Hall Department of Economics 549 Evans Hall, #3880 University of California–Berkeley Berkeley, CA 94720-3880 Robert E. Hall Hoover Institution Stanford University Stanford, CA 94305-6010 John Haltiwanger Department of Economics University of Maryland College Park, MD 20742 Lars Peter Hansen Department of Economics The University of Chicago 1126 East 59th Street Chicago, IL 60637 Michael J. Harper Bureau of Labor Statistics 2 Massachusetts Avenue, NE Washington, DC 20212-0001 John C. Heaton Graduate School of Business University of Chicago 1101 East 58th Street Chicago, IL 60637
Charles Hulten Department of Economics University of Maryland Room 3105, Tydings Hall College Park, MD 20742 Ron Jarmin U.S. Census Bureau CECON 8905 Presidential Pkwy, WP2-211 Upper Marlboro, MD 20772-2653 Dale W. Jorgenson Department of Economics Littauer 122 Harvard University Cambridge, MA 02138 Julia Lane The Urban Institute 2100 M Street, NW Washington, DC 20037 Paul Lengermann Division of Research and Statistics Board of Governors of the Federal Reserve System 20th Street and Constitution Avenue, NW Washington, DC 20551 Baruch Lev Stern School of Business New York University Suite 312, Tisch Hall 40 West Fourth Street New York, NY 10012
Nan Li Department of Economics University of Chicago 1126 East 59th Street Chicago, IL 60637
Suresh Radhakrishnan School of Management, SM41 University of Texas at Dallas Box 830688 Richardson, TX 75083-0688
Frank R. Lichtenberg Graduate School of Business Columbia University 3022 Broadway, 614 Uris Hall New York, NY 10027
Kristin Sandusky Room 2138 FB 3 LED Program U.S. Census Bureau 4700 Silver Hill Road Suitland, MD 20746
Lisa M. Lynch Fletcher School of Law and Diplomacy Tufts University Medford, MA 02155 Kristin McCue U.S. Census Bureau CECON 8905 Presidential Pkwy, WP2-211 Upper Marlboro, MD 20772-2653 Kevin McKinney LED Program U.S. Census Bureau 4700 Silver Hill Road Suitland, MD 20746 Brent R. Moulton Bureau of Economic Analysis 1441 L Street, NW Washington, DC 20230 Sumiye Okubo Bureau of Economic Analysis 1441 L Street, NW Washington, DC 20230 Edward C. Prescott Research Department Federal Reserve Bank of Minneapolis 90 Hennepin Avenue Minneapolis, MN 55401-1804
Kathryn Shaw Graduate School of Business Littlefield 339 Stanford University Stanford, CA 94305-5015 Daniel Sichel Division of Research and Statistics Board of Governors of the Federal Reserve System 20th Street and Constitution Avenue, NW Washington, DC 20551 Kevin J. Stiroh Banking Studies Function Federal Reserve Bank of New York 33 Liberty Street New York, NY 10045 David B. Wasshausen Bureau of Economic Analysis 1441 L Street, NW Washington, DC 20230 Frank C. Wykoff Department of Economics 425 N. College Avenue 204 Carnegie Building Pomona College Claremont, CA 91711
Author Index
Abowd, John M., 5, 155, 158n3, 165, 168, 409n4 Acemoglu, Daron, 211 Advisory Commission to Study the Consumer Price Index, 13 Aghion, P., 77, 211 Aizcorbe, A., 333, 335n11, 385 Arellano, Manuel, 58 Arora, Ashish, 280n15 Arrow, Kenneth J., 76 Arthur, Jeffrey, 209 Ashenfelter, Orley, 547n8 Ashman, David, 547n8 Athey, Susan, 212 Atkeson, A., 75, 76, 112, 117, 118n2, 119, 206n1 Babbage, C., 490, 499 Bahk, Byong-Hyong, 255, 256 Bailey, Thomas, 209, 211 Baily, Martin Neil, 14n3, 403n1, 403n2 Bansal, R., 125, 127, 129, 130, 130n13, 136 Barron, J., 218 Bartel, Ann, 32n19, 209, 212, 218 Basu, Susanto, 450n25 Batt, Rose, 211 Becker, Brian E., 209, 215t, 218 Becker, Gary S., 76, 155 Beidelman, C., 516, 516n54 Bell, P. W., 487n16 Berg, Norman, 212 Berg, Peter, 211
Berger, M., 218 Berndt, Ernst R., 428, 473, 490n23, 493n26, 537n67 Bernstein, Jeffrey I., 69 Black, D., 218 Black, Sandra E., 5, 32n19, 209, 210, 211, 212, 216, 218, 221, 227 Blades, Derek, 407, 427 Blair, Margaret M., 13n2, 205 Blanchard, O. J., 134 Blasi, Joseph, 208 Blau, Francine D., 199 Blundell, Richard, 58 Bõhm-Bawerk, E. V., 484n8, 506 Boldrin, M., 116 Bond, Stephen R., 51, 56, 58, 69, 160 Boning, Brent, 208 Bosworth, Barry, 13n2 Box, G. E. P., 145 Brealey, Richard A., 61 Bresnahan, Timothy, 53n5, 102, 104, 107n6, 154, 209, 211, 573 Brown, S., 91n22 Brynjolfsson, Erik, 2, 14, 26, 53n5, 59, 60, 60n11, 102, 104, 112, 154, 159, 209, 211, 212, 382, 403n2, 468n29, 567, 568, 569, 572, 573 Burgess, Simon, 162, 164, 165 Campbell, John Y., 125, 126, 140, 468n29 Canning, J. B., 490, 513n52, 515, 516, 523n62
Capelli, Peter, 211 Caroli, Eve, 209, 211, 212, 215t, 217, 218 Carson, Carol S., 276, 304, 308n64, 316, 321 Carter, William, 211 Cartwright, D. W., 382n16 Ceccagnoli, Marco, 280n15 Chan, L. K. C., 74n1 Christensen, Lauritis R., 238n1, 469, 475, 484n9, 489n17, 489n19, 515n53 Christiano, L., 116 Church, A. H., 490, 490n22, 495n31, 499, 533n66 Cochrane, J. H., 115, 116, 123 Cockburn, Iain, 14n3, 102 Cohen, R., 125 Cohen, Wesley M., 280 Cole, R., 382n16, 382n20 Colecchia, Alessandra, 402f Corrado, Carol, 3, 13n1, 335n11, 385, 452 Creecy, Robert H., 165 Cummins, Jason G., 4, 51, 69, 160 Daniel, K., 140 Daniels, M. B., 505 Delaney, John, 209 Denison, Edward F., 11, 301n52, 301n53 Diewert, W. Erwin, 7, 475, 484n9, 485n12, 486n13, 487n16, 490n22, 503n39, 504nn41–43, 505, 507, 510, 512, 513, 518, 520, 521, 522n61, 523, 524 DiMasi, Joseph A., 241 Dittmar, R., 125, 127, 129, 130, 130n13, 136 Domar, Evsey, 459n27, 460, 476 Doms, Mark, 6, 324, 324n3, 332, 335n11, 384, 384n21, 385 Dougherty, S. M., 320 Dulberger, Ellen, 386, 404n3, 450n26 Dunlop, John, 209 Dunn, Wendy E., 552n15 Edwards, E. O., 487n16 Eisner, Robert, 280n16, 284 Engle, R. F., 129 Epstein, L. G., 138, 493n26 Ericson, R., 76 Evenson, R. E., 75, 76 Fama, E. F., 79, 112, 120, 121, 122, 143 Fast, Norman, 212 Feenstra, Robert C., 551n12 Feldstein, Martin, 546 Fells, J. M., 490n21, 490n22, 510
Fernald, John G., 450n25 Fisher, I., 507 Fisher, J., 116 Fitoussi, David, 568 Flamm, Kenneth, 331, 333 Forman, Chris, 324, 324n3, 332, 384, 384n21 Foster, Lucia, 172n11 Fox, Kevin J., 486n13, 490n22, 517n58 Fraumeni, Barbara M., 6, 155, 276n2, 301n52, 302n57, 410, 420, 427, 436, 437n14, 456, 460, 469, 473 French, K. R., 79, 112, 120, 121, 121n7, 122, 143 Fuss, Melvyn A., 428 Gandal, Neil, 382 Garcia-Peñolosa, 211 Garcke, E., 490n21, 490n22, 510 Gardner, Jennifer M., 199 Gittleman, Maury, 215t Gollop, Frank M., 155, 301n52, 410, 436, 437n14, 456, 460, 469, 473 Gordon, Robert, 364, 403n1 Gore, Al, 159 Gort, Michael, 255, 256 Granger, W. J., 129, 130 Griliches, Zvi, 2, 11, 14n3, 15, 32n32, 36, 58, 69, 78, 101n1, 238, 238n1, 239, 246, 279n14, 320n3, 420, 436, 473, 474, 489n17, 489n19, 491, 496, 515n53, 546 Grimm, Bruce T., 7, 26, 276, 304, 308n64, 316, 321, 324, 331, 375n9, 376n12, 385, 385n22 Groshen, Erica L., 157, 199, 200 Grossman, Elliott S., 301n53 Grunfeld, Y., 113 Gu, Feng, 159, 160 Gullickson, William, 13n1, 452, 476, 477 Hall, Bronwyn H., 14n3, 51, 58, 69, 149, 302n56, 546 Hall, Robert E., 14, 14n3, 47, 80, 112, 119, 119n5, 120, 142, 150, 159, 160, 450n25, 468n29, 516, 543, 545, 552n14, 571 Haltiwanger, John C., 5, 154, 162, 172n11 Hand, John R. M., 69 Hansen, Lars Peer, 4, 114, 123, 126, 127, 127n9, 140, 140n16 Harper, Michael J., 13n1, 452, 476, 477, 490n23, 493n26, 537n76 Harrison, J., 114
Kelley, Maryellen, 209 Kemerer, Chris F., 382 Kendrick, John, 11, 280n16, 301n52, 301n53 Khan, Mosahid, 23n10 Khurshid, A., 333 Knittel, Christopher R., 551n12 Kohli, Ulrich, 517n58 Kramarz, Francis, 155 Kremer, Michael, 158, 211 Kreps, D., 114 Krivelyova, Anya, 211, 216 Krizan, C. J., 172n11 Kruse, Douglas, 208
Jaffe, Adam, 69, 302n56, 320n3 Jagannathan, R., 123 Jarmin, Ron, 5, 154 Jermann, U. J., 116 Jones, Charles I., 238n1 Jorgenson, Dale W., 7, 11, 15, 155, 238n1, 301n52, 325, 364, 403n1, 404, 410, 420, 427, 436, 437n15, 456, 460, 469, 473, 474, 475, 484n9, 489n17, 489n19, 491, 504n43, 515n53, 516n54, 543 Jovanovic, B., 76 Joyce, Mary, 215t Juhn, Chinhui, 156
LaFleur, Timothy J., 385 Lakonishok, J., 74n1 LaLonde, Robert, 547n8 Landefeld, J. Steven, 364, 385n22, 473, 475 Landes, Elisabeth M., 32 Lane, Julia, 5, 157, 162, 164, 165, 200 Larkins, Daniel, 393n31 Lau, Lawrence J., 475 Lawrence, D. A., 484n9, 504n43 Lawrence, Robert Z., 403n2 Lawson, Ann M., 394n32 Lazear, Edward, 209n3 Lebow, David E., 13 Lengermann, Paul, 5, 155, 157, 168 Lettau, M., 129, 136 Lev, Baruch, 2, 4, 13n2, 16n6, 23n10, 159, 160, 409n4, 569 Levine, David I., 200 Li, Nan, 4 Lichtenberg, Frank R., 6, 239, 256n26 Littleton, A. C., 533n66 Lo, K., 91n22 Lochner, Lance, 43 Lucas, R. E., 113 Ludvigson, S., 129, 136 Lum, Sherlene K. S., 415n14, 427 Lundblad, C., 125, 127, 129, 130, 130n13, 136 Lynch, Lisa M., 5, 32, 209, 210, 211, 212, 216, 218, 221, 227 Lys, T., 91n22
Kamarz, Francis, 165 Kandel, E., 209n3 Kaplan, David S., 200 Kehoe, P. J., 75, 76, 112, 117, 118n2, 119, 206n1
MacKinlay, A., 123 Maddison, Angus, 505, 507 Mairesse, Jacques, 58, 78, 101n1 Malcomson, James, 207 Malerba, Franco, 107n6
Ichniowski, Casey, 208, 209, 209n2, 212, 214, 215t, 236 Ilg, Randy E., 199 Inoue, Tohru, 51
Mansfield, Edwin, 277n5 Margolis, David, 155 Marshall, D., 140 Maskin, Eric, 158, 211 Matheson, E., 494n29, 499, 510, 513, 513n52, 515 McAffee, A., 567 McCue, Kristin, 5 McGrattan, Ellen R., 14, 43, 55n7, 112, 119, 119n3, 119n4 McKinney, Kevin, 5, 155 McKinsey Global Institute, 78n8, 403n1 Milgrom, Paul, 101n2, 209n3 Moses, Karin E., 385 Moulton, Brent R., 7, 21n9, 381n14, 385 Moyer, Brian C., 415n14, 427 Moylan, Carol E., 276, 304, 308n64, 316, 321, 386, 393n31 Murphy, Kevin M., 156 Musgrave, John C., 31 Myers, Stewart C., 61 Nadiri, M. Ishaq, 69, 279n14 Nakamura, Leonard, 2, 14, 28, 29, 29n15, 280n16, 567 Neumark, David, 211 Nordhaus, William D., 13, 239, 364 Ohlson, J., 90 Okubo, Sumiye, 6, 276n2 Oliner, Stephen D., 301n52, 364, 382n18, 403n1, 404n3, 450n26, 460 Olley, Steve, 101n1 Organization for Economic Cooperation and Development (OECD), 14, 23n10 Osterman, Paul, 211, 214, 215t Oulton, N., 502–3n38 Pakes, A., 76, 101n1 Parente, Stephen L., 43 Parker, Robert P., 21n9, 26, 375n9, 376n12, 473, 475 Paton, W. A., 533n66 Pierce, Brooks, 156, 200 Pigou, A. C., 502 Polk, C., 125 Preinreich, G. A. D., 90 Prennushi, Gabrielle, 209, 212 Prescott, Edward C., 14, 43, 50n3, 55n7, 76, 112, 113, 119, 119n3, 119n4 Quah, D., 134
Radhakrishnan, Suresh, 4, 409n4 Richard, J., 114 Roberds, W., 126, 127, 127n9 Roberts, John M., 13, 101n2, 209n3 Robinson, Brooks B., 393n31 Romer, Paul, 238, 254 Rones, Phillip L., 199 Rosen, S., 76 Rosenfield, Andrew M., 32 Rothschild, Michael, 546 Sakellaris, Plutarchos, 256 Salmon, Laurie, 157, 200 Sandusky, Kristin, 5 Sandy, Carola, 211 Santos, T., 125, 129, 136, 136n15 Sargent, T. J., 126, 127, 127n9 Schere, Frederick, 278n13 Schmalenbach, E., 516 Schreyer, Paul, 402f, 407, 427, 452, 467, 490n23, 495n32 Scott, M., 502–3n38 Seskin, Eugene P., 21n9, 393n31 Shapiro, Matthew D., 2, 13, 450n25 Shaw, Kathryn, 208, 209, 209n2, 212, 236 Shiller, Robert J., 55, 125, 126, 468n29 Shleifer, A., 151 Sichel, Daniel E., 3, 301n52, 364, 382n18, 403n1, 404n3, 450n26, 460 Siegel, Donald, 239 Siegel, Jeremy, 55 Sims, C. A., 130, 145, 145n22 Singleton, K., 127, 140, 140n16 Slifman, Larry, 13n1, 452 Solomons, D., 421 Solow, Robert M., 11, 12, 15, 18n7, 78, 238, 238n1 Sougiannis, T., 74n1 Spletzer, James R., 157, 162, 200 Stern, Scott, 212 Stevens, David, 162, 164, 165 Stiroh, Kevin J., 7, 13n1, 301n52, 325, 364, 403n1, 403n2, 420, 456 Stockton, David J., 13 Swan, T. W., 18n7 Taber, Christopher, 43 Telser, L. G., 112 Tiao, G. C., 145 Tomer, J. F., 76 Trajtenberg, Manuel, 302n56, 333 Triplett, Jack E., 13n1, 382n16, 404n3,
Westphal, L. E., 75, 76 White, Alan G., 554n18 Wilcox, David W., 13 Wilson, Dan, 256 Wood, D. O., 490n23, 493n26, 537n67 Wykoff, Frank, 62, 487n16, 516, 516n54, 516n56, 517n57, 539n78, 544n3, 552n14 Yang, Shinkyu, 14, 26, 59, 60, 60n11, 159, 567, 568, 569, 572, 573 Young, Allan H., 31 Zha, T., 145, 145n22 Zin, S. E., 138
Subject Index
Adjustment cost model, 113–20 Aggregate value: growth-accounting decomposition of, 405; growth in, 406t Aggregation: methodology for, 456–60; over vintages of capital goods, 502–5; results for, 460–67 Alliances, 565 Analog modems, 335–36 Annual Capital Expenditures Survey (ACES), 368–71 Annual Survey of Manufacturers (ASM), 216 Asset price change, empirical determination of rates of, 490–94 Assets. See Intangible assets; Tangible assets Average labor productivity (ALP), 407–8 Book equity to market equity (BE-ME), 120–25, 143–44 Brand development, 28 Brand names, value of, 28–30 Cable modems, 336 Capital: aggregation over vintages of, 502– 5; calculating intensity of, 191–92; in complete model of economic growth, 16–19; fundamental equations relating stocks and flows of, 480–87; intertemporal framework for defining and measuring, 19–20; sources of growth model and, 16–19
Capital gains, 560 Capital input, measuring, 420; data for, 426–27; issues in, 427–30; methodology for, 423–26; notation for, 423; results for, 430–35 Capital investment, components to be included in measures of, 3–4 Cell phones, 331 College workers, 405–6. See also Knowledge workers; Labor Communications equipment, 6–7; classifying, 327–29; defined, 326–27; fiber optics, 339–49; hedonic price indexes for, 383–85; investment and multifactor productivity and, 358–60; overview of, 323–26; PBX/KTS equipment, 336–39; previous research on prices of, 331–34; price indexes for, 349–58, 376–77, 378t; prices of modems, 334–36, 337f, 338f; spending on, 329–31 Compressed Mortality File (CMF), 259–61 Compustat database, 163, 192–96 Computerized information, identifying and estimating business spending on, 23–26 Computers and peripheral equipment, 377t; hedonic price indexes for, 381–82; price indexes for, 376 Congressional Budget Office, 403n1 Consumer price index (CPI), accuracy of, 13 Council of Economic Advisors, 403n1
Creative property. See Scientific and creative property Cross-section depreciation, defined, 487–90 Current Population Survey (CPS), 155 Database software, 562 Declining balance depreciation model, 513–21; compared to other models of depreciation, 526–33 Dell, 50–51, 53–54, 560 Dense wave division multiplexing (DWDM), 342, 346–47 Depreciation: comparison of models for, 526–33; cross-section, defined, 487–90; declining balance model of, 513–21; linear decline in efficiency model of, 521–26; obsolescence and, 494–502; one-hoss-shay model of, 506–10; straight-line model of, 510–13 Deterioration, 496 Digital cross-connects (DCCs), 347, 348t Dividend risk, 125–28; measuring, 128–36; results with additional aggregate shock, 136–41 Drug development, 240–45 eBay, 560, 561 E-capital, 558 Economic Censuses, 165–66 Economic competencies, 28–29 Economic growth: capital in complete model of, 16–19; contribution of labor in, 405–6 Economic performance, analyses of, 237 Efficiency, one-hoss-shay model of, 506–10 Employee voice, 207; measurement issues for, 221, 223, 227 Enron, 563 Equity valuation, organization capital and, 90–96 Exhaustion, 496 Fiber optics, 339–49 Fiber rings, 342, 343f Firm, market value of, 160–61 Food and Drug Administration (FDA), drug approvals by, 240–44 Function points, improved price indexes for, 395 Generalized method of moments (GMM), 54, 58–59, 64–66, 67–69
Geometric depreciation model, 513–21 Global Crossing, 563 Growth accounting, production function approach to, 15–16 Growth trends: “sources of growth” model for, 11–12; understanding, 12–13 Hedonic price indexes: for informationprocessing equipment and software, 381–85; vs. traditional matched-model price indexes, 385–86. See also Price indexes Hewlett-Packard, 50, 53–54 High-tech capital: measurement issues of, 6; organizational and human capital and, 2–3 High-technology investments, 1; measuring, 2 Human capital, 2, 5, 157–59; comparison of new and traditional measures of, 168– 70; conceptual and measurement issues for, 155–57; construction of firm-level measures for, 171–72; construction of new measures for, 170–71; data sources for, 163–66; econometric and interpretation issues for, 161–63; estimates for, 167–72; high-tech capital and, 2–3; by industry, 172; intangibles and, 153–54; relation between market value and, 180–84; relation between productivity and, at micro level, 172–80; summary statistics on, by industry, 172t Human resource practices, 2 IBM, 84–86 Industry productivity, measuring, 448; methodology for, 448–50; results for, 450–56 Information-processing (IP) equipment and software: challenges of estimating private fixed investment in, 366–67; components of, 364t; current-dollar estimates for, 368–75; hedonic price indexes for, 381–85; improved currentdollar estimates for, 394; improved price indexes for, 394–96; investment in, movements in real GDP and, 365– 66; national income and product accounts and, 363–65; producer price indexes (PPIs) for, 376–81; real estimates for, 375–76 Information technology (IT), 48, 49; declin-
Labor input, measuring, 436–37; data for, 438–42; issues in, 442–43; notation for, 437–38; results for, 444–48 Labor productivity: calculating, 191–92; growth rates for, by industry and subperiod, 456, 457 Lightbulb model of depreciation. See Onehoss-shay model Linear efficiency decline model, of depreciation, 521–26; compared to other models of depreciation, 526–33 Local area network (LAN) equipment, 332, 332n8; hedonic price index for, 384–85 Longevity, of Americans: R&D by pharmaceutical companies and, 240–53; study of, empirical results for, 261–67; study of, measuring changes in age distribution of, by cause of death, 259–61; study of, theoretical model for, 253–55. See also Pharmaceutical industry; Research and development (R&D) Longitudinal Employer-Household Dynamics (LEHD), 155, 164–65, 185–91 Longitudinal Research Database (LRD), 216–17 Market value, relation between human capital and, 180–84 Matched-model price indexes, vs. hedonic price indexes, 385–86 Medical devices, FDA approval and, 256– 59 Modems, 334–36 Multifactor productivity (MFP), 51, 59; communications equipment and, 358– 60 Multiplexers, 344–46 National Drug Data File, 248–50, 248n14 National Employers Survey (NES), 216 National income and product accounts (NIPAs), 4; adjusted, 285–90; components of, 277–80; computer software and, 23–26; current treatment of R&D in, 276–81; information-processing (IP) equipment and software and, 363–65; intangible spending and, 31; methodological changes in annual revision of, 389–90; plans for improving, 390–93; R&D and, 275–76; software and, 7. See also Informationprocessing equipment and software
Network externalities, 2 New drug applications (NDAs), 241; examples of approvals of, 252t; FDA data on approvals for, 250, 251t New economy, 1; human capital and, 153– 54; organizational capital and, 208–12; understanding growth in, 11–13 New molecular entities (NMEs), 241 NIPAs. See National income and product accounts (NIPAs) Obsolescence, depreciation and, 494–502 Office and accounting equipment, 380, 381t One-hoss-shay model, of efficiency and depreciation, 506–10; compared to other models of depreciation, 526–33 Organizational capital, 2, 5–6, 48; defined, 49–51; high-tech capital and, 2–3; valuation of, 4 Organizational change and development, spending on, 29 Organization capital: defined, 75–77, 205–8; equity valuation and, 90–96; firmspecific estimates of annual contribution of, 83–84; firm-specific examples of, 84–87; information technology and, 87–90; measurement issues for, 218–31; measuring, 212–18; methodology for estimating, 77–83; new economy and, 208–12; overview of, 73–75 Outputs, measuring industry: data for, 413– 15; issues in, 415–16; methodology for, 411–13; notations for, 410–11; results for, 416–20 Own-account software, improved price indexes for, 394–95 PBX/KTS equipment, 336–39 Pharmaceutical industry, 237; aggregate expenditures on R&D by, 245, 246t; R&D in, 6; R&D investment by, longevity of Americans and, 240–53. See also Longevity, of Americans Photocopy and related equipment: price indexes for, 378–80, 380t; improved price indexes for, 395–96 Portfolios, returns on, 120–25 Price indexes: for communications equipment, 349–58, 376–77, 378t; for information-processing equipment and software, 394–96; function points for, 395; for own-account software, 394–95; for
photocopy and related equipment, 395–96; for software, 376, 377t. See also Hedonic price indexes Producer price indexes (PPIs), for information-processing equipment and software, 376–81 Production functions, 237–38 Production possibility frontiers, 405; defined, 456–58 Productivity, 157–59; calculating, 191–92; organization capital and, 209; relation between human capital and, at micro level, 172–80 Progress, technical, kinds of, 255 q theory, 557–58, 569 R&D. See Research and development (R&D) Relations Professionals et Negotiations d’Entreprise (REPONSE), 218 Rents, 557–58, 560 Research and development (R&D), 2, 237; aggregate expenditures on, by pharmaceutical industry, 245, 246t; alternative scenarios for, 301–7; capitalizing, 282– 85; economic growth and, 239; effects of proposed changes in estimates of, 293–300; Enron, Winstar, and Global Crossing and, 563–64; estimated rates of return to private, 279; estimating spending on, 26–28; new product development and, 27–28; 1995–2000 compared with 1961–66, 307–8; in NIPAs, 6, 275–76; noisiness of information on, 564–66; nonscientific, 27; patenting and, 246–48; in pharmaceutical industry, 6; proposed treatment of, 281–92; technical appendix for construction of estimates of, 309–13. See also Knowledge capital; National income and product accounts (NIPAs) Returns on investments, 120–25 Sales, general, and administration (SGA) expenses, 78–83 Scarcity rents, 558 Scientific and creative property, 26–28 Software, 2; database, 562; hedonic price indexes for, 381–83; improvements in estimates of purchased, 386–88; NIPAs and, 7; price indexes for, 376, 377t
Sources of growth model, 11–12; capital and, 16–19 Standard Statistical Establishment List (SSEL), 216 Stocks: fundamental equations relating flows of capital and, 480–87; rents and, 557–58 Straight-line depreciation model, 510–13; compared to other models of depreciation, 526–33 Survey of Employer-Provided Training (SEPT), 215–16 Symmetry principle, 20
238–39, 406–9; methodology for measuring, at industry level, 448–50; results for industry contributions to, 460–67; results of measuring, 450–56 Total quality management (TQM), 227 Training, 2, 28–29, 206–7, 222; measurement issues for, 221, 223–27
Tangible assets, 157–59; determining average age of retirement for, 505; econometric and interpretation issues for, 161–63. See also Intangible assets Technical progress, kinds of, 255 Telephone switches, 331–32 Telephone switching equipment, hedonic price index for, 383–84 Total factor productivity (TFP), 12, 78,
Wal-Mart, 560 Winstar, 563 Work design, 207–8 Workplace Industrial Relations Survey (WIRS), 217–18
Value-added growth, measuring: data for, 413–15; issues in, 415–16; methodology for, 411–13; notations for, 410–11; results for, 416–20, 422t Vintage effects, of capital, 502–5
Xerox Corporation, 86–87 Yahoo!, 561